
Search results

  • Journal article
    Filippi S, Barnes CP, Kirk PDW, Kudo T, Kunida K, McMahon SS, Tsuchiya T, Wada T, Kuroda S, Stumpf MPH et al., 2016,

    Robustness of MEK-ERK Dynamics and Origins of Cell-to-Cell Variability in MAPK Signaling

    , Cell Reports
  • Conference paper
    Joulani P, Gyorgy A, Szepesvari C, 2015,

    Classification with Margin Constraints: A Unification with Applications to Optimization

    , 8th NIPS Workshop on Optimization for Machine Learning

    This paper introduces Classification with Margin Constraints (CMC), a simple generalization of cost-sensitive classification that unifies several learning settings. In particular, we show that a CMC classifier can be used, out of the box, to solve regression, quantile estimation, and several anomaly detection formulations. On the one hand, our reductions to CMC are at the loss level: the optimization problem to solve under the equivalent CMC setting is exactly the same as the optimization problem under the original (e.g. regression) setting. On the other hand, due to the close relationship between CMC and standard binary classification, the ideas proposed for efficient optimization in binary classification naturally extend to CMC. As such, any improvement in CMC optimization immediately transfers to the domains reduced to CMC, without the need for new derivations or programs. To our knowledge, this unified view has been overlooked by the existing practice in the literature, where an optimization technique (such as SMO or PEGASOS) is first developed for binary classification and then extended to other problem domains on a case-by-case basis. We demonstrate the flexibility of CMC by reducing two recent anomaly detection and quantile learning methods to CMC.
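    A toy illustration of the cost-asymmetry idea these reductions build on: quantile estimation falls out of minimising an asymmetric (pinball) loss, i.e. a cost-sensitive objective. This sketch shows only that underlying intuition, not the paper's CMC formulation; the data and the brute-force minimiser are illustrative.

    ```python
    # Quantile estimation via an asymmetric, cost-sensitive loss (a sketch,
    # not the paper's CMC classifier).

    def pinball_loss(q, ys, tau):
        # Penalise under-estimates with weight tau, over-estimates with 1 - tau.
        return sum(tau * (y - q) if y >= q else (1 - tau) * (q - y) for y in ys)

    def quantile_estimate(ys, tau):
        # Brute-force minimiser over the observed values; a data point
        # always attains the minimum of the pinball loss.
        return min(ys, key=lambda q: pinball_loss(q, ys, tau))

    data = list(range(1, 11))
    print(quantile_estimate(data, 0.5))   # a median of the sample
    print(quantile_estimate(data, 0.9))   # an upper quantile
    ```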

  • Conference paper
    Hu X, Prashanth LA, Gyorgy A, Szepesvari C et al., 2015,

    (Bandit) Convex Optimization with Biased Noisy Gradient Oracles

    , 8th NIPS Workshop on Optimization for Machine Learning

    For bandit convex optimization we propose a model, where a gradient estimation oracle acts as an intermediary between a noisy function evaluation oracle and the algorithms. The algorithms can control the bias-variance tradeoff in the gradient estimates. We prove lower and upper bounds for the minimax error of algorithms that interact with the objective function by controlling this oracle. The upper bounds replicate many existing results (capturing the essence of existing proofs) while the lower bounds put a limit on the achievable performance in this setup. In particular, our results imply that no algorithm can achieve the optimal minimax error rate in stochastic bandit smooth convex optimization.
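    A minimal sketch of such a gradient-estimation oracle built on noisy function evaluations, assuming a 1-D objective and Gaussian evaluation noise. The smoothing radius `delta` is the illustrative knob for the bias-variance tradeoff the abstract describes: a larger `delta` lowers the variance of the estimate at the cost of more bias. All names and constants here are assumptions for the sketch.

    ```python
    import random

    def noisy_f(x, sigma=0.1):
        # Noisy zeroth-order oracle for f(x) = x^2.
        return x * x + random.gauss(0.0, sigma)

    def two_point_gradient(x, delta, n_samples=1000):
        # Symmetric-difference gradient estimate, averaged over repeated
        # calls to the noisy evaluation oracle.
        est = 0.0
        for _ in range(n_samples):
            est += (noisy_f(x + delta) - noisy_f(x - delta)) / (2.0 * delta)
        return est / n_samples

    random.seed(0)
    print(two_point_gradient(1.0, delta=0.1))  # true gradient at x = 1 is 2
    ```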

  • Conference paper
    Wu Y, György A, Szepesvari C, 2015,

    Online Learning with Gaussian Payoffs and Side Observations

    , 29th Annual Conference on Neural Information Processing Systems (NIPS), Publisher: Neural Information Processing Systems Foundation, Inc.
  • Conference paper
    Calandra R, Ivaldi S, Deisenroth MP, Peters J et al., 2015,

    Learning Torque Control in Presence of Contacts using Tactile Sensing from Robot Skin

    , 2015 IEEE-RAS International Conference on Humanoid Robots, Publisher: IEEE

    Whole-body control in unknown environments is challenging: unforeseen contacts with obstacles can lead to poor tracking performance and potential physical damage to the robot. Hence, a whole-body control approach for future humanoid robots in (partially) unknown environments needs to take contact sensing into account, e.g., by means of artificial skin. However, translating contacts from skin measurements into physically well-understood quantities can be problematic, as the exact position and strength of the contact needs to be converted into torques. In this paper, we suggest an alternative approach that directly learns the mapping from both skin and the joint state to torques. We propose to learn such an inverse dynamics model with contacts using a mixture-of-contacts approach that exploits the linear superimposition of contact forces. The learned model can, making use of uncalibrated tactile sensors, accurately predict the torques needed to compensate for the contact. As a result, tracking of trajectories with obstacles and tactile contact can be executed more accurately. We demonstrate on the humanoid robot iCub that our approach reduces the tracking error in the presence of dynamic contacts.
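    A toy sketch of the linear-superimposition idea the abstract relies on: the total joint torque is a free-space inverse-dynamics torque plus one additive term per active skin contact. Every function, gain, and number below is illustrative, not the paper's learned model.

    ```python
    def free_space_torque(q, dq, ddq):
        # Hypothetical single-joint rigid-body inverse dynamics.
        inertia, damping = 0.5, 0.1
        return inertia * ddq + damping * dq

    def contact_torque(activation, learned_gain=2.0):
        # One per-contact model mapping a raw skin activation to a torque;
        # in the paper this mapping is learned, here it is a made-up gain.
        return learned_gain * activation

    def total_torque(q, dq, ddq, skin_activations):
        # Superimpose the contributions of all active taxels.
        tau = free_space_torque(q, dq, ddq)
        for a in skin_activations:
            if a > 0.0:
                tau += contact_torque(a)
        return tau

    print(total_torque(0.3, 1.0, 2.0, [0.0, 0.25, 0.1]))
    ```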

  • Conference paper
    Kormushev P, Demiris Y, Caldwell DG, 2015,

    Kinematic-free Position Control of a 2-DOF Planar Robot Arm

  • Conference paper
    Kryczka P, Kormushev P, Tsagarakis N, Caldwell DG et al., 2015,

    Online Regeneration of Bipedal Walking Gait Optimizing Footstep Placement and Timing

  • Journal article
    Calandra R, Seyfarth A, Peters J, Deisenroth MP et al., 2015,

    Bayesian Optimization for Learning Gaits under Uncertainty

    , Annals of Mathematics and Artificial Intelligence, Vol: 76, Pages: 5-23, ISSN: 1012-2443

    Designing gaits and corresponding control policies is a key challenge in robot locomotion. Even with a viable controller parametrization, finding near-optimal parameters can be daunting. Typically, this kind of parameter optimization requires specific expert knowledge and extensive robot experiments. Automatic black-box gait optimization methods greatly reduce the need for human expertise and time-consuming design processes. Many different approaches for automatic gait optimization have been suggested to date. However, no extensive comparison among them has yet been performed. In this article, we thoroughly discuss multiple automatic optimization methods in the context of gait optimization. We extensively evaluate Bayesian optimization, a model-based approach to black-box optimization under uncertainty, on both simulated problems and real robots. This evaluation demonstrates that Bayesian optimization is particularly suited for robotic applications, where it is crucial to find a good set of gait parameters in a small number of experiments.
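    A skeleton of the black-box optimization loop the article evaluates: repeatedly (1) fit a surrogate to past (parameters, cost) trials and (2) pick the next trial by an acquisition rule trading off predicted cost against exploration. For brevity this sketch replaces the Gaussian-process surrogate with a nearest-neighbour prediction plus a distance-based exploration bonus, and the robot experiment with a made-up 1-D cost function; it shows the loop structure, not the article's method.

    ```python
    def gait_cost(theta):
        # Stand-in for a real robot trial returning e.g. a tracking error.
        return (theta - 0.7) ** 2

    def acquisition(theta, trials, kappa=0.05):
        # Lower is better: predicted cost (nearest evaluated trial) minus an
        # exploration bonus that grows with distance to that trial.
        nearest = min(trials, key=lambda t: abs(t[0] - theta))
        return nearest[1] - kappa * abs(nearest[0] - theta)

    def optimize(n_iters=20):
        candidates = [i / 100.0 for i in range(101)]   # grid on [0, 1]
        trials = [(0.0, gait_cost(0.0)), (1.0, gait_cost(1.0))]
        for _ in range(n_iters):
            theta = min(candidates, key=lambda c: acquisition(c, trials))
            trials.append((theta, gait_cost(theta)))
        return min(trials, key=lambda t: t[1])

    best_theta, best_cost = optimize()
    print(best_theta, best_cost)   # homes in near the optimum at 0.7
    ```

    In a full Bayesian optimization setup, the nearest-neighbour surrogate would be a GP posterior and the bonus would come from the posterior variance; the loop itself is unchanged.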

  • Conference paper
    Deisenroth MP, Ng JW, 2015,

    Distributed Gaussian Processes

    , 2015 International Conference on Machine Learning (ICML), Publisher: Journal of Machine Learning Research

    To scale Gaussian processes (GPs) to large datasets we introduce the robust Bayesian Committee Machine (rBCM), a practical and scalable product-of-experts model for large-scale distributed GP regression. Unlike state-of-the-art sparse GP approximations, the rBCM is conceptually simple and does not rely on inducing or variational parameters. The key idea is to recursively distribute computations to independent computational units and, subsequently, recombine them to form an overall result. Efficient closed-form inference allows for straightforward parallelisation and distributed computations with a small memory footprint. The rBCM is independent of the computational graph and can be used on heterogeneous computing infrastructures, ranging from laptops to clusters. With sufficient computing resources our distributed GP model can handle arbitrarily large data sets.
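    A minimal sketch of the closed-form product-of-experts fusion at the heart of such models: each expert returns a Gaussian prediction (mean, variance) for a test point, and the experts are combined by adding precisions. The per-expert weights and the prior-precision correction of the robust BCM are omitted here for simplicity, and the expert predictions are made-up numbers.

    ```python
    def combine_experts(predictions):
        # predictions: list of (mean, variance) pairs from independent
        # GP experts at one test input.
        precision = sum(1.0 / var for _, var in predictions)
        combined_var = 1.0 / precision
        combined_mean = combined_var * sum(m / var for m, var in predictions)
        return combined_mean, combined_var

    # Two experts: a confident one near 1.0 and an uncertain one near 3.0.
    mean, var = combine_experts([(1.0, 0.1), (3.0, 1.0)])
    print(mean, var)   # mean is pulled toward the confident expert
    ```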

  • Journal article
    Carrera A, Palomeras N, Hurtós N, Kormushev P, Carreras M et al., 2015,

    Cognitive System for Autonomous Underwater Intervention

    , Pattern Recognition Letters, ISSN: 0167-8655

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
