

Distributed optimization and statistical learning via the alternating direction method of multipliers

by Stephen Boyd

Members: 4 · Reviews: None · Popularity: 3,433,292 · Average rating: None · Discussions: None
Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers argues that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, it discusses applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. It also discusses general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
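
The abstract mentions ADMM applications such as the lasso. For orientation only (not taken from the monograph), here is a minimal Python/NumPy sketch of the standard scaled-form ADMM iteration for the lasso, assuming the usual splitting minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z; the function name lasso_admm, the fixed penalty rho, and the iteration count are illustrative assumptions, not the monograph's implementation.

import numpy as np

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    # Illustrative sketch (hypothetical name and defaults): ADMM for the lasso
    # with splitting x = z, objective 0.5*||Ax - b||^2 + lam*||x||_1.
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                        # scaled dual variable
    AtA = A.T @ A + rho * np.eye(n)        # formed once, reused in every x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))              # x-update: ridge-type solve
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)    # z-update: soft-thresholding
        u = u + x - z                                              # scaled dual update
    return z

Each iteration is cheap once A^T A + rho*I has been formed, which is one reason the abstract singles out ADMM for large-scale and distributed problems; in the consensus form the monograph discusses, each machine performs a local x-update on its share of the data and the z-update aggregates the local results.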
Recently added by tdietterich, jukofyork


There are currently no Talk conversations about this book.

No reviews

References to this work on external resources.

Wikipedia in English

None


No library descriptions found.







 
