This HTML5 document contains 36 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content can be recognized by any HTML5 Microdata processor.
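As a minimal sketch of how such embedded Microdata can be read programmatically (assuming the third-party Python library extruct; the HTML snippet below is a made-up stand-in, not this page's actual markup):

import extruct

# Tiny stand-in for the page markup: one item carrying an rdfs:label property.
html = """
<div itemscope itemtype="http://dbpedia.org/resource/Sample_complexity">
  <span itemprop="http://www.w3.org/2000/01/rdf-schema#label">Sample complexity</span>
</div>
"""

# Ask extruct for HTML5 Microdata items only.
data = extruct.extract(html, syntaxes=["microdata"])
for item in data["microdata"]:
    print(item["type"])
    for prop, value in item["properties"].items():
        print(" ", prop, "=", value)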

Namespace Prefixes

Prefix        IRI
dcterms       http://purl.org/dc/terms/
dbo           http://dbpedia.org/ontology/
foaf          http://xmlns.com/foaf/0.1/
n12           https://global.dbpedia.org/id/
dbt           http://dbpedia.org/resource/Template:
rdfs          http://www.w3.org/2000/01/rdf-schema#
freebase      http://rdf.freebase.com/ns/
dbpedia-fa    http://fa.dbpedia.org/resource/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
wikipedia-en  http://en.wikipedia.org/wiki/
prov          http://www.w3.org/ns/prov#
dbp           http://dbpedia.org/property/
dbc           http://dbpedia.org/resource/Category:
xsdh          http://www.w3.org/2001/XMLSchema#
wikidata      http://www.wikidata.org/entity/
dbr           http://dbpedia.org/resource/
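A small illustrative sketch (not part of the page) of how these prefixes abbreviate the full IRIs used in the statements below; the PREFIXES dict and expand helper are names chosen here, and only a few prefixes from the table are copied in:

PREFIXES = {
    "dbr":  "http://dbpedia.org/resource/",
    "dbc":  "http://dbpedia.org/resource/Category:",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "owl":  "http://www.w3.org/2002/07/owl#",
}

def expand(curie: str) -> str:
    # Split "prefix:localname" and join the IRI from the table with the local name.
    prefix, _, local = curie.partition(":")
    return PREFIXES[prefix] + local

print(expand("dbr:Sample_complexity"))
# http://dbpedia.org/resource/Sample_complexity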

Statements

Subject Item
dbr:Sample_complexity
rdfs:label
Sample complexity
rdfs:comment
The sample complexity of a machine learning algorithm represents the number of training samples that it needs in order to successfully learn a target function. More precisely, the sample complexity is the number of training samples that we need to supply to the algorithm, so that the function returned by the algorithm is within an arbitrarily small error of the best possible function, with probability arbitrarily close to 1. There are two variants of sample complexity:
dcterms:subject
dbc:Machine_learning
dbo:wikiPageID
43269516
dbo:wikiPageRevisionID
1068917677
dbo:wikiPageWikiLink
dbr:Dictionary_learning dbr:Probably_approximately_correct_learning dbr:Model-free_(reinforcement_learning) dbr:Random_variable dbr:No_free_lunch_theorem dbr:Semi-supervised_learning dbr:Online_machine_learning dbr:Reinforcement_learning dbr:Glivenko-Cantelli_class dbr:Monte_Carlo_tree_search dbr:Regularization_(mathematics) dbr:Tikhonov_regularization dbr:Metric_learning dbr:VC_dimension dbr:Vapnik–Chervonenkis_theory dbr:No_free_lunch_in_search_and_optimization dbc:Machine_learning dbr:Empirical_risk_minimization dbr:Active_learning_(machine_learning) dbr:Rademacher_complexity dbr:Machine_learning
owl:sameAs
freebase:m.0114dpwp n12:mVCu wikidata:Q18354077 dbpedia-fa:پیچیدگی_نمونه
dbp:wikiPageUsesTemplate
dbt:Machine_learning_bar dbt:Reflist
dbo:abstract
The sample complexity of a machine learning algorithm represents the number of training samples that it needs in order to successfully learn a target function. More precisely, the sample complexity is the number of training samples that we need to supply to the algorithm, so that the function returned by the algorithm is within an arbitrarily small error of the best possible function, with probability arbitrarily close to 1. There are two variants of sample complexity:
* The weak variant fixes a particular input-output distribution;
* The strong variant takes the worst-case sample complexity over all input-output distributions.
The No free lunch theorem, discussed below, proves that, in general, the strong sample complexity is infinite, i.e. that there is no algorithm that can learn the globally-optimal target function using a finite number of training samples. However, if we are only interested in a particular class of target functions (e.g., only linear functions) then the sample complexity is finite, and it depends linearly on the VC dimension of the class of target functions.
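An illustrative formalization of the definition in the abstract (the symbols N(ε, δ), H, E, and h_n are notation chosen here; they do not appear on the page):

% Sketch of the PAC-style definition matching the prose above:
% H is the class of target functions, E(h) the expected error of hypothesis h,
% h_n the function returned by the algorithm after n training samples,
% epsilon the accuracy and delta the failure probability.
\[
  N(\varepsilon, \delta) \;=\; \min \Bigl\{ n :
    \Pr\Bigl[\, \mathcal{E}(h_n) - \min_{h \in \mathcal{H}} \mathcal{E}(h) \le \varepsilon \,\Bigr] \ge 1 - \delta
  \Bigr\}
\]
% The weak variant evaluates this for one fixed input-output distribution;
% the strong variant takes the worst case over all such distributions.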
prov:wasDerivedFrom
wikipedia-en:Sample_complexity?oldid=1068917677&ns=0
dbo:wikiPageLength
14205
foaf:isPrimaryTopicOf
wikipedia-en:Sample_complexity