## Publications

Cheraghchi M, 2019, Nearly optimal robust secret sharing, *Designs, Codes and Cryptography*, Vol: 87, Pages: 1777-1796, ISSN: 0925-1022

We prove that a known general approach to improve Shamir's celebrated secret sharing scheme, i.e., adding an information-theoretic authentication tag to the secret, can make it robust for n parties against any collusion of size δn, for any constant δ ∈ (0, 1/2). Shamir's original scheme is robust for all δ ∈ (0, 1/3). Beyond that, we employ the best known list decoding algorithms for Reed-Solomon codes and show that, with high probability, only the correct secret maintains the correct information-theoretic tag if an algebraic manipulation detection (AMD) code is used to tag secrets. This result holds in the so-called "non-rushing" model in which the n shares are submitted simultaneously for reconstruction. We thus obtain a fully explicit and robust secret sharing scheme in this model that is essentially optimal in all parameters, including the share size, which is k(1 + o(1)) + O(κ), where k is the secret length and κ is the security parameter. Like Shamir's scheme, in this modified scheme any set of more than δn honest parties can efficiently recover the secret. Using algebraic geometry codes instead of Reed-Solomon codes, the share length can be decreased to a constant (only depending on δ) while the number of shares n can grow independently. In this case, when n is large enough, the scheme satisfies the "threshold" requirement in an approximate sense; i.e., any set of δn(1 + ρ) honest parties, for arbitrarily small ρ > 0, can efficiently reconstruct the secret. From a practical perspective, the main importance of our result is in showing that existing systems employing Shamir-type secret sharing schemes can be made much more robust than previously thought with minimal change, essentially only involving the addition of a short and simple checksum to the original data.
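The Shamir-plus-tag idea above can be sketched in a few lines. This is only an illustrative toy: the prime, the function names, and the keyed tag are ours, and the tag is a simple stand-in for a real AMD code, not the paper's construction.

```python
import random

P = 2**61 - 1  # prime field modulus (a Mersenne prime); toy choice

def shamir_share(secret, n, t):
    """Split `secret` into n shares; any t+1 shares reconstruct it
    (random degree-t polynomial with constant term = secret)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

def tag(secret, key):
    """Toy keyed tag authenticating the secret; a real AMD code
    would replace this in the scheme the abstract describes."""
    return (key * secret + key * key) % P
```

To get the robust variant's flavor, one would share the pair (secret, tag) and reject any reconstruction whose tag fails to verify.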

Cheraghchi M, Ribeiro J, 2019, Improved Upper Bounds and Structural Results on the Capacity of the Discrete-Time Poisson Channel, Publisher: IEEE, Pages: 4052-4068, ISSN: 0018-9448

Cheraghchi M, 2019, Expressions for the Entropy of Basic Discrete Distributions, *IEEE Transactions on Information Theory*, Vol: 65, Pages: 3999-4009, ISSN: 0018-9448

Cheraghchi Bashi Astaneh M, 2019, Capacity upper bounds for deletion-type channels, *Journal of the Association for Computing Machinery (ACM)*, Vol: 66, Pages: 1-17, ISSN: 0004-5411

We develop a systematic approach, based on convex programming and real analysis, for obtaining upper bounds on the capacity of the binary deletion channel and, more generally, channels with i.i.d. insertions and deletions. Other than the classical deletion channel, we give special attention to the Poisson-repeat channel introduced by Mitzenmacher and Drinea (IEEE Transactions on Information Theory, 2006). Our framework can be applied to obtain capacity upper bounds for any repetition distribution (the deletion and Poisson-repeat channels corresponding to the special cases of Bernoulli and Poisson distributions). Our techniques essentially reduce the task of proving capacity upper bounds to maximizing a univariate, real-valued, and often concave function over a bounded interval. The corresponding univariate function is carefully designed according to the underlying distribution of repetitions, and the choices vary depending on the desired strength of the upper bounds as well as the desired simplicity of the function (e.g., being only efficiently computable versus having an explicit closed-form expression in terms of elementary, or common special, functions). Among our results, we show the following: 1. The capacity of the binary deletion channel with deletion probability d is at most (1 − d) log φ for d ≥ 1/2, and, assuming the capacity function is convex, is at most 1 − d log(4/φ) for d < 1/2, where φ = (1 + √5)/2 is the golden ratio. This is the first nontrivial capacity upper bound for any value of d outside the limiting case d → 0 that is fully explicit and proved without computer assistance. 2. We derive the first set of capacity upper bounds for the Poisson-repeat channel. Our results uncover further striking connections between this channel and the deletion channel, and suggest, somewhat counter-intuitively, that the Poisson-repeat channel is actually analytically simpler than the deletion channel and may be of key importance to a complete understanding of the deletion channel.
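The fully explicit bound in item 1 is easy to evaluate; a small helper, assuming logarithms base 2 (capacity in bits per channel use) and carrying the paper's convexity caveat for d < 1/2:

```python
from math import log2, sqrt

PHI = (1 + sqrt(5)) / 2  # golden ratio

def deletion_capacity_upper_bound(d):
    """Upper bound on the binary deletion channel capacity, in bits per
    channel use, for deletion probability d. For d < 1/2 the bound is
    conditional on convexity of the capacity function."""
    if d >= 0.5:
        return (1 - d) * log2(PHI)
    return 1 - d * log2(4 / PHI)
```

Note that the two branches agree exactly at d = 1/2, since 1 − (1/2)·log₂(4/φ) = (1/2)·log₂ φ.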

Bui TV, Kuribayashi M, Cheraghchi M, et al., 2019, A framework for generalized group testing with inhibitors and its potential application in neuroscience

The main goal of group testing with inhibitors (GTI) is to efficiently identify a small number of defective items and inhibitor items in a large set of items. A test on a subset of items is positive if the subset satisfies some specific properties. Inhibitor items cancel the effects of defective items, which often makes the outcome of a test containing defective items negative. Different GTI models can be formulated by considering how specific properties have different cancellation effects. This work introduces generalized GTI (GGTI), in which a new type of item is added, i.e., hybrid items. A hybrid item plays the roles of both defective items and inhibitor items. Since the number of instances of GGTI is large (more than 7 million), we introduce a framework for classifying all types of items non-adaptively, i.e., all tests are designed in advance. We then explain how GGTI can be used to classify neurons in neuroscience. Finally, we show how to realize our proposed scheme in practice.

Lin F, Safavi-Naini R, Cheraghchi M, et al., 2019, Non-Malleable Codes against Active Physical Layer Adversary, IEEE International Symposium on Information Theory (ISIT), Publisher: IEEE, Pages: 2753-2757

Cheraghchi M, Grigorescu E, Juba B, et al., 2018, AC^0 ∘ MOD_2 lower bounds for the Boolean inner product, *Journal of Computer and System Sciences*, Vol: 97, Pages: 45-59, ISSN: 0022-0000

AC^0 ∘ MOD_2 circuits are AC^0 circuits augmented with a layer of parity gates just above the input layer. We study AC^0 ∘ MOD_2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC^0 ∘ MOD_2 of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω̃(n^2) lower bound for the special case of depth-4 AC^0 ∘ MOD_2.

Lin F, Cheraghchi M, Guruswami V, et al., 2018, Secret sharing with binary shares, Publisher: arXiv

Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length ℓ among any N ≤ 2^ℓ players such that for a threshold parameter t, (i) the knowledge of any t shares does not reveal any information about the secret and, (ii) any choice of t+1 shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length ℓ, and in this sense Shamir's scheme is optimal. The relaxed notion of ramp schemes requires the reconstruction of the secret from any t+1+g shares, for a gap parameter g > 0. Ramp secret sharing is possible with share lengths depending only on the gap ratio g/N. In this work, we study secret sharing in the extremal case of bit-long shares, where even ramp secret sharing becomes impossible. We show, however, that a slightly relaxed but equally effective notion of semantic security for the secret, and negligible reconstruction error probability, eliminates the impossibility. Moreover, we provide explicit constructions of such schemes. Our relaxation results in a separation of adaptive and non-adaptive adversaries. For non-adaptive adversaries, we explicitly construct secret sharing schemes that provide secrecy against any τ fraction of observed shares, and reconstruction from any κ fraction of shares, for any choices of 0 ≤ τ < κ ≤ 1. Our construction achieves secret length N(κ − τ − o(1)), which we show to be optimal. Finally, we construct explicit schemes against adaptive adversaries attaining a secret length Ω(N(κ − τ)). Our work makes a new connection between secret sharing and coding theory, this time wiretap codes, that was not known before, and raises new interesting open questions.

Cheraghchi M, 2018, Expressions for the entropy of binomial-type distributions, IEEE International Symposium on Information Theory (ISIT), Publisher: IEEE, Pages: 2520-2524, ISSN: 2157-8095

We develop a general method for computing logarithmic and log-gamma expectations of distributions. As a result, we derive series expansions and integral representations of the entropy for several fundamental distributions, including the Poisson, binomial, beta-binomial, negative binomial, and hypergeometric distributions. Our results also establish connections between these entropy functions and the Riemann zeta function and its generalizations.
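Series expansions of this kind can be sanity-checked numerically; for instance, the entropy of the Poisson distribution can be evaluated by direct truncated summation (in nats; the function name and truncation level are our own illustrative choices):

```python
from math import exp, lgamma, log

def poisson_entropy(lam, terms=200):
    """Entropy H(X) = -sum_k p_k log p_k (in nats) of X ~ Poisson(lam),
    lam > 0, via direct truncated summation of p_k = e^{-lam} lam^k / k!."""
    h = 0.0
    for k in range(terms):
        logp = -lam + k * log(lam) - lgamma(k + 1)  # log p_k
        h -= exp(logp) * logp
    return h
```

For example, poisson_entropy(1.0) is about 1.30 nats, and for large λ the value approaches the Gaussian-style approximation ½·ln(2πeλ).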

Bui TV, Kuribayashi M, Cheraghchi M, et al., 2018, Efficiently decodable non-adaptive threshold group testing, IEEE International Symposium on Information Theory (ISIT), Publisher: IEEE, Pages: 2584-2588, ISSN: 2157-8095

We consider non-adaptive threshold group testing for identification of up to d defective items in a set of n items, where a test is positive if it contains at least u defective items, for a threshold 2 ≤ u ≤ d, and negative otherwise. The defective items can be identified using t = O((d/u)^u (d/(d−u))^(d−u) (u log(d/u) + log(1/ε)) · d^2 log n) tests with probability at least 1 − ε for any ε > 0, or t = O((d/u)^u (d/(d−u))^(d−u) · d^3 log n · log(n/d)) tests with probability 1. The decoding time is t × poly(d^2 log n). This result significantly improves the best known results for decoding non-adaptive threshold group testing: O(n log n + n log(1/ε)) for probabilistic decoding, where ε > 0, and O(n^u log n) for deterministic decoding.

Cheraghchi M, Ribeiro J, 2018, Improved capacity upper bounds for the discrete-time Poisson channel, IEEE International Symposium on Information Theory (ISIT), Publisher: IEEE, Pages: 1769-1773, ISSN: 2157-8095

We present new capacity upper bounds for the discrete-time Poisson channel with no dark current and an average-power constraint. These bounds are a simple consequence of techniques developed by one of the authors for the seemingly unrelated problem of upper bounding the capacity of binary deletion and repetition channels. Previously, the best known capacity upper bound in the regime where the average-power constraint does not approach zero was due to Martinez (JOSA B, 2007), which we re-derive as a special case of our framework. Furthermore, we instantiate our framework to obtain a closed-form bound that noticeably improves the result of Martinez everywhere.
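Upper bounds like these are naturally set against capacity lower bounds obtained from any fixed input distribution. A minimal numeric sketch (our own illustrative choice of a two-point input, not the paper's method): the mutual information of X ∈ {0, a} through Y ~ Poisson(X) lower-bounds the capacity at average power p·a.

```python
from math import exp, lgamma, log

def poisson_pmf_log(k, mean):
    """log P(Y = k) for Y ~ Poisson(mean); mean = 0 is a point mass at 0."""
    if mean == 0:
        return 0.0 if k == 0 else float("-inf")
    return -mean + k * log(mean) - lgamma(k + 1)

def two_point_mutual_info(a, p, kmax=200):
    """I(X;Y) in nats for the discrete-time Poisson channel Y ~ Poisson(X),
    with X = a w.p. p and X = 0 w.p. 1 - p (average power p*a).
    This is a capacity lower bound to compare against upper bounds."""
    mi = 0.0
    for k in range(kmax):
        terms = []
        for mean, w in ((a, p), (0.0, 1 - p)):
            lp = poisson_pmf_log(k, mean)
            if lp != float("-inf"):
                terms.append((w, lp))
        py = sum(w * exp(lp) for w, lp in terms)  # output law P(Y = k)
        if py <= 0:
            continue
        for w, lp in terms:  # sum p(x) p(y|x) log(p(y|x) / p(y))
            mi += w * exp(lp) * (lp - log(py))
    return mi
```

For a = 5 and p = 1/2 the two inputs are nearly distinguishable from Y, so the value comes close to (but stays below) H(X) = ln 2 nats.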

Cheraghchi Bashi Astaneh M, 2018, Capacity upper bounds for deletion-type channels, 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2018 TheoryFest), Publisher: Association for Computing Machinery, Pages: 493-506, ISSN: 0737-8017

We develop a systematic approach, based on convex programming and real analysis, for obtaining upper bounds on the capacity of the binary deletion channel and, more generally, channels with i.i.d. insertions and deletions. Other than the classical deletion channel, we give special attention to the Poisson-repeat channel introduced by Mitzenmacher and Drinea (IEEE Transactions on Information Theory, 2006). Our framework can be applied to obtain capacity upper bounds for any repetition distribution (the deletion and Poisson-repeat channels corresponding to the special cases of Bernoulli and Poisson distributions). Our techniques essentially reduce the task of proving capacity upper bounds to maximizing a univariate, real-valued, and often concave function over a bounded interval. The corresponding univariate function is carefully designed according to the underlying distribution of repetitions and the choices vary depending on the desired strength of the upper bounds as well as the desired simplicity of the function (e.g., being only efficiently computable versus having an explicit closed-form expression in terms of elementary, or common special, functions). Among our results, we show that the capacity of the binary deletion channel with deletion probability d is at most (1 − d) log φ for d ≥ 1/2, and, assuming the capacity function is convex, is at most 1 − d log(4/φ) for d < 1/2, where φ = (1 + √5)/2 is the golden ratio. This is the first nontrivial capacity upper bound for any value of d outside the limiting case d → 0 that is fully explicit and proved without computer assistance. Furthermore, we derive the first set of capacity upper bounds for the Poisson-repeat channel. Our results uncover further striking connections between this channel and the deletion channel, and suggest, somewhat counter-intuitively, that the Poisson-repeat channel is actually analytically simpler than the deletion channel and may be of key importance to a complete understanding of the deletion channel.

Chandrasekaran K, Cheraghchi M, Gandikota V, et al., 2018, Local testing of lattices, *SIAM Journal on Discrete Mathematics*, Vol: 32, Pages: 1265-1295, ISSN: 0895-4801

Testing membership in lattices is of practical relevance, with applications to integer programming, error detection in lattice-based communication, and cryptography. In this work, we initiate a systematic study of local testing for membership in lattices, complementing and building upon the extensive body of work on locally testable codes. In particular, we formally define the notion of local tests for lattices and present the following: 1. We show that in order to achieve low query complexity, it is sufficient to design one-sided nonadaptive canonical tests. This result is akin to, and based on, an analogous result for error-correcting codes due to [E. Ben-Sasson, P. Harsha, and S. Raskhodnikova, SIAM J. Comput., 35 (2005), pp. 1--21]. 2. We demonstrate upper and lower bounds on the query complexity of local testing for membership in code formula lattices. We instantiate our results for code formula lattices constructed from Reed--Muller codes to obtain nearly matching upper and lower bounds on the query complexity of testing such lattices. 3. We contrast lattice testing to code testing by showing lower bounds on the query complexity of testing low-dimensional lattices. This illustrates large lower bounds on the query complexity of testing membership in the well-known knapsack lattices. On the other hand, we show that knapsack lattices with bounded coefficients have low-query testers if the inputs are promised to lie in the span of the lattice.

Cheraghchi M, Ribeiro J, 2018, Sharp Analytical Capacity Upper Bounds for Sticky and Related Channels, Publisher: IEEE

Lin F, Safavi-Naini R, Cheraghchi M, et al., 2017, Non-malleable codes with leakage and applications to secure communication, Publisher: arXiv

Non-malleable codes are randomized codes that protect coded messages against modification by functions in a tampering function class. These codes are motivated by providing tamper resilience in applications where a cryptographic secret is stored in a tamperable storage device and the protection goal is to ensure that the adversary cannot benefit from their tamperings with the device. In this paper we consider non-malleable codes for protection of secure communication against active physical layer adversaries. We define a class of functions that closely model tampering of communication by adversaries who can eavesdrop on a constant fraction of the transmitted codeword, and use this information to select a vector of tampering functions that will be applied to a second constant fraction of codeword components (possibly overlapping with the first set). We derive rate bounds for non-malleable codes for this function class and give two modular constructions. The first construction adapts and provides new analysis for an existing construction in the new setting. The second construction uses a new approach that results in an explicit construction of non-malleable codes. We show applications of our results in securing message communication against active physical layer adversaries in two settings: wiretap II with active adversaries and Secure Message Transmission (SMT) in networks. We discuss our results and directions for future work.

Cheraghchi M, Indyk P, 2017, Nearly optimal deterministic algorithm for sparse Walsh-Hadamard transform, *ACM Transactions on Algorithms*, Vol: 13, Pages: 34:1-34:36, ISSN: 1549-6325

For every fixed constant α > 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform (i.e., Discrete Fourier Transform over the Boolean cube) of an N-dimensional vector x ∈ R^N in time k^(1+α) (log N)^O(1). Specifically, the algorithm is given query access to x and computes a k-sparse x̃ ∈ R^N satisfying ‖x̃ − x̂‖₁ ≤ c‖x̂ − H_k(x̂)‖₁ for an absolute constant c > 0, where x̂ is the transform of x and H_k(x̂) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses nonadaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami et al. [2009]). Moreover, we design a deterministic and nonadaptive ℓ1/ℓ1 compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k^(1+α) (log N)^O(1) (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, and Strauss [Berinde et al. 2008]. Our methods use linear lossless condensers in a black box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to k (log N)^O(1) reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter α). By allowing the algorithm to use randomness while still using nonadaptive queries, the runtime of the algorithm can be improved to Õ(k log^3 N).
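For context, the objects in the ℓ1/ℓ1 guarantee are simple to compute exactly: the dense transform x̂ and the benchmark H_k(x̂). The sketch below is ours and deliberately naive (O(N log N) time on the full vector), not the paper's sublinear-time sparse algorithm.

```python
import numpy as np

def wht(x):
    """Unnormalized Walsh-Hadamard transform of a length-2^m vector,
    via the standard in-place butterfly in O(N log N) operations."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def best_k_sparse(xhat, k):
    """H_k(xhat): keep the k largest-magnitude coefficients, zero the rest."""
    out = np.zeros_like(xhat)
    idx = np.argsort(np.abs(xhat))[-k:]
    out[idx] = xhat[idx]
    return out
```

A sparse recovery algorithm with the paper's guarantee would return some x̃ with ‖x̃ − x̂‖₁ ≤ c‖x̂ − best_k_sparse(x̂, k)‖₁ while querying only a few entries of x.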

Richardson TJ, Luby MG, Cheraghchi Bashi Astaneh M, et al., 2017, Systems and methods for verification of code resiliency for data storage, WO 2017039795

Systems and methods which implement forward checking of data integrity are disclosed. A storage system of embodiments may, for example, comprise data integrity forward checking logic which is operable to perform forward checking of data integrity in real-time or near real-time to check that a number of node failures can be tolerated without loss of data. Embodiments may be utilized to provide assurance that a number of fragments needed for source data recovery will be available for the source objects most susceptible to failure when a certain number of additional fragments are lost, such as due to storage node failures.
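In miniature, and assuming a plain (n, k) MDS erasure code in which any k of an object's n fragments suffice for recovery (the patent's actual codes and thresholds may differ), the forward check looks like:

```python
def tolerable_failures(available_fragments, k):
    """Number of further fragment losses an object can survive: recovery
    under an (n, k) MDS erasure code needs any k fragments, so the margin
    is available_fragments - k (negative means the object is already lost)."""
    return available_fragments - k

def objects_at_risk(objects, k, failures):
    """Forward check: names of objects (mapped to their count of intact
    fragments) that would become unrecoverable if `failures` more of their
    fragments were lost in the worst case."""
    return [name for name, avail in objects.items() if avail - failures < k]
```

Running such a check in near real-time identifies the source objects most susceptible to failure before the loss actually occurs.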

Richardson TJ, Luby MG, Cheraghchi Bashi Astaneh M, et al., 2017, Systems and methods for verification of code resiliency for data storage, US 20170060700

Systems and methods which implement forward checking of data integrity are disclosed. A storage system of embodiments may, for example, comprise data integrity forward checking logic which is operable to perform forward checking of data integrity in real-time or near real-time to check that a number of node failures can be tolerated without loss of data. Embodiments may be utilized to provide assurance that a number of fragments needed for source data recovery will be available for the source objects most susceptible to failure when a certain number of additional fragments are lost, such as due to storage node failures.

Cheraghchi M, Guruswami V, 2017, Non-malleable coding against bit-wise and split-state tampering, *Journal of Cryptology*, Vol: 30, Pages: 191-241, ISSN: 0933-2790

Non-malleable coding, introduced by Dziembowski et al. (ICS 2010), aims at protecting the integrity of information against tampering attacks in situations where error detection is impossible. Intuitively, information encoded by a non-malleable code either decodes to the original message or, in the presence of any tampering, to an unrelated message. Non-malleable coding is possible against any class of adversaries of bounded size. In particular, Dziembowski et al. show that such codes exist and may achieve positive rates for any class of tampering functions of size at most 2^(2^(αn)), for any constant α ∈ [0, 1). However, this result is existential and has thus attracted a great deal of subsequent research on explicit constructions of non-malleable codes against natural classes of adversaries. In this work, we consider constructions of coding schemes against two well-studied classes of tampering functions; namely, bit-wise tampering functions (where the adversary tampers each bit of the encoding independently) and the much more general class of split-state adversaries (where two independent adversaries arbitrarily tamper each half of the encoded sequence). We obtain the following results for these models. (1) For bit-tampering adversaries, we obtain explicit and efficiently encodable and decodable non-malleable codes of length n achieving rate 1 − o(1) and error (also known as "exact security") exp(−Ω̃(n^(1/7))). Alternatively, it is possible to improve the error to exp(−Ω̃(n)) at the cost of making the construction Monte Carlo with success probability 1 − exp(−Ω(n)) (while still allowing a compact description of the code). Previously, the best known construction of bit-tampering coding schemes was due to Dziembowski et al. (ICS 2010), which is a Monte Carlo construction achieving rate close to 0.1887. (2) We initiate the study of seedless non-malleable extractors as a natural variation of the notion of non-malleable extractors.

Chandrasekaran K, Cheraghchi Bashi Astaneh M, Gandikota V, et al., 2016, Local testing for membership in lattices, Foundations of Software Technology and Theoretical Computer Science conference (FSTTCS 2016), Publisher: Schloss Dagstuhl - LZI GmbH, Pages: 23.1-23.32

Testing membership in lattices is of practical relevance, with applications to integer programming, error detection in lattice-based communication and cryptography. In this work, we initiate a systematic study of local testing for membership in lattices, complementing and building upon the extensive body of work on locally testable codes. In particular, we formally define the notion of local tests for lattices and present the following: 1. We show that in order to achieve low query complexity, it is sufficient to design one-sided non-adaptive canonical tests. This result is akin to, and based on, an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing, 35(1):1-21). 2. We demonstrate upper and lower bounds on the query complexity of local testing for membership in code formula lattices. We instantiate our results for code formula lattices constructed from Reed-Muller codes to obtain nearly-matching upper and lower bounds on the query complexity of testing such lattices. 3. We contrast lattice testing from code testing by showing lower bounds on the query complexity of testing low-dimensional lattices. This illustrates large lower bounds on the query complexity of testing membership in knapsack lattices. On the other hand, we show that knapsack lattices with bounded coefficients have low-query testers if the inputs are promised to lie in the span of the lattice.

Chandrasekaran K, Cheraghchi M, Gandikota V, et al., 2016, Local testing for membership in lattices

Testing membership in lattices is of practical relevance, with applications to integer programming, error detection in lattice-based communication and cryptography. In this work, we initiate a systematic study of local testing for membership in lattices, complementing and building upon the extensive body of work on locally testable codes. In particular, we formally define the notion of local tests for lattices and present the following: 1. We show that in order to achieve low query complexity, it is sufficient to design one-sided non-adaptive canonical tests. This result is akin to, and based on an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing, 35(1):1-21). 2. We demonstrate upper and lower bounds on the query complexity of local testing for membership in code formula lattices. We instantiate our results for code formula lattices constructed from Reed-Muller codes to obtain nearly-matching upper and lower bounds on the query complexity of testing such lattices. 3. We contrast lattice testing from code testing by showing lower bounds on the query complexity of testing low-dimensional lattices. This illustrates large lower bounds on the query complexity of testing membership in knapsack lattices. On the other hand, we show that knapsack lattices with bounded coefficients have low-query testers if the inputs are promised to lie in the span of the lattice.

Cheraghchi Bashi Astaneh M, 2016, Nearly optimal robust secret sharing, 2016 IEEE International Symposium on Information Theory, Publisher: IEEE, Pages: 2509-2513, ISSN: 2157-8117

We prove that a known approach to improve Shamir's celebrated secret sharing scheme, i.e., adding an information-theoretic authentication tag to the secret, can make it robust for n parties against any collusion of size δn, for any constant δ ∈ (0, 1/2). This result holds in the so-called "non-rushing" model in which the n shares are submitted simultaneously for reconstruction. We thus finally obtain a simple, fully explicit, and robust secret sharing scheme in this model that is essentially optimal in all parameters, including the share size, which is k(1+o(1)) + O(κ), where k is the secret length and κ is the security parameter. Like Shamir's scheme, in this modified scheme any set of more than δn honest parties can efficiently recover the secret. Using algebraic geometry codes instead of Reed-Solomon codes, the share length can be decreased to a constant (only depending on δ) while the number of shares n can grow independently. In this case, when n is large enough, the scheme satisfies the "threshold" requirement in an approximate sense; i.e., any set of δn(1 + ρ) honest parties, for arbitrarily small ρ > 0, can efficiently reconstruct the secret.

Cheraghchi M, Grigorescu E, Juba B, et al., 2016, AC^0 ∘ MOD_2 lower bounds for the Boolean inner product, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), Publisher: Schloss Dagstuhl, ISSN: 1868-8969

AC^0 ∘ MOD_2 circuits are AC^0 circuits augmented with a layer of parity gates just above the input layer. We study AC^0 ∘ MOD_2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC^0 ∘ MOD_2 of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω̃(n^2) lower bound for the special case of depth-4 AC^0 ∘ MOD_2. Our proof of the depth-4 lower bound employs a new "moment-matching" inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions' values at 0, given that their first d moments match.

Cheraghchi M, Indyk P, 2016, Nearly optimal deterministic algorithm for sparse Walsh-Hadamard transform, ACM-SIAM Symposium on Discrete Algorithms (SODA16), Publisher: SIAM, Pages: 298-317

For every fixed constant α > 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform of an N-dimensional vector x ∈ ℝᴺ in time k^{1+α} (log N)^{O(1)}. Specifically, the algorithm is given query access to x and computes a k-sparse x̃ ∈ ℝᴺ satisfying ‖x̃ − x̂‖₁ ≤ c‖x̂ − H_k(x̂)‖₁, for an absolute constant c > 0, where x̂ is the transform of x and H_k(x̂) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive ℓ₁/ℓ₁ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k^{1+α} (log N)^{O(1)} (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008). Our methods use linear lossless condensers in a black-box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to k (log N)^{O(1)} reconstruction time with a reduced exponent in the polylogarithmic factor, and eliminating the extra parameter α). By allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to Õ(k log³ N).
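To fix notation (an illustrative sketch, not the paper's sublinear-time algorithm, and with hypothetical helper names): x̂ can be computed naively by the O(N log N) fast Walsh-Hadamard transform, and H_k(x̂) keeps only the k largest-magnitude coefficients.

```python
def walsh_hadamard(x):
    """Fast Walsh-Hadamard transform of a length-2^m sequence, O(N log N).
    Naive baseline only; the paper's algorithm runs in time sublinear in N."""
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def best_k_sparse(xhat, k):
    """H_k(x̂): keep the k largest-magnitude coefficients, zero out the rest."""
    keep = set(sorted(range(len(xhat)), key=lambda i: abs(xhat[i]), reverse=True)[:k])
    return [v if i in keep else 0 for i, v in enumerate(xhat)]
```

The guarantee in the abstract then says the output x̃ approximates x̂ in ℓ₁ norm up to a constant factor of the error of this best k-sparse truncation.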


Chandrasekaran K, Cheraghchi M, Gandikota V, et al., 2016, Local Testing for Membership in Lattices, Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Pages: 46:1-46:1
