Does Perceptron make the same number of mistakes?

September 27, 2021
Christopher R. Teeple

1. Consider running the Perceptron algorithm on a training set S arranged in a certain order. Now suppose we run it with the same initial weights and on the same training set but in a different order, S′. Does Perceptron make the same number of mistakes? Does it end up with the same final weights? If so, prove it. If not, give a counterexample, i.e. an S and S′ where order matters. (A small Perceptron sketch for experimenting with example order appears after these questions.)
2. We have mainly focused on squared loss, but there are other interesting losses in machine learning. Consider the following loss function, which we denote by φ(z) = max(0, −z). Let S be a training set (x_1, y_1), . . . , (x_m, y_m), where each x_i ∈ R^n and y_i ∈ {−1, 1}. Consider running stochastic gradient descent (SGD) to find a weight vector w that minimizes (1/m) Σ_{i=1}^m φ(y_i · w^T x_i). Explain the explicit relationship between this algorithm and the Perceptron algorithm. Recall that for SGD, the update rule when the i-th example is picked at random is w_new = w_old − η ∇φ(y_i · w^T x_i). (A sketch of this update appears after these questions.)
3. Here we will give an illustrative example of a weak learner for a simple concept class. Let the domain be the real line, R, and let C refer to the concept class of “3-piece classifiers”, which are functions of the following form: for θ1 < θ2 and b ∈ {−1, 1}, h_{θ1,θ2,b}(x) is b if x ∈ [θ1, θ2] and −b otherwise. In other words, they take a certain Boolean value inside a certain interval and the opposite value everywhere else. For example, h_{10,20,1}(x) would be +1 on [10, 20] and −1 everywhere else. Let H refer to the simpler class of “decision stumps”, i.e. functions h_{θ,b} such that h(x) is b for all x ≤ θ and −b otherwise.
(a) Show formally that for any distribution on R (assume finite support, for simplicity; i.e., assume the distribution is bounded within [−B, B] for some large B) and any unknown labeling function c ∈ C that is a 3-piece classifier, there exists a decision stump h ∈ H that has error at most 1/3, i.e. P[h(x) ≠ c(x)] ≤ 1/3.

(b) Describe a simple, efficient procedure for finding a decision stump that minimizes error with respect to a finite training set of size m. Such a procedure is called an empirical risk minimizer (ERM). (A brute-force ERM sketch appears after these questions.)
(c) Give a short intuitive explanation for why we should expect that we can easily pick m sufficiently large that the training error is a good approximation of the true error, i.e. why we can ensure generalization. (Your answer should relate to what we have gained in going from requiring a learner for C to requiring a learner for H.) This lets us conclude that we can weakly learn C using H.
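For readers who want to experiment with question 1, here is a minimal Perceptron sketch in Python. It is not part of the original assignment: the function name `perceptron`, the `max_passes` cap, and the use of NumPy arrays are our own choices; only the mistake-driven update itself is the standard one.

```python
import numpy as np

def perceptron(examples, w_init, max_passes=1000):
    """Run the classic Perceptron on `examples`, a list of (x, y) pairs with
    x an n-dimensional NumPy array and y in {-1, +1}.
    The order of `examples` is part of the input, which is exactly what
    question 1 asks you to vary. Returns the final weights and mistake count."""
    w = w_init.copy()
    mistakes = 0
    for _ in range(max_passes):
        clean_pass = True
        for x, y in examples:              # visit the points in the given order
            if y * np.dot(w, x) <= 0:      # mistake: update w toward y * x
                w = w + y * x
                mistakes += 1
                clean_pass = False
        if clean_pass:                     # a full pass with no mistakes: done
            break
    return w, mistakes
```

Running this on the same points in two different orders (S and a permutation S′) then becomes a one-line experiment.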
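For question 2, the following sketch (again ours, not the assignment's) implements a single SGD step on the stated loss φ(z) = max(0, −z), taking the subgradient to be 0 at z = 0. The helper name `sgd_step` and the default step size η = 1 are assumptions made for illustration.

```python
import numpy as np

def sgd_step(w, x, y, eta=1.0):
    """One SGD step on phi(z) = max(0, -z) evaluated at z = y * w^T x.
    A subgradient of phi is -1 for z < 0 and 0 for z >= 0, so the update
    w_new = w_old - eta * grad either adds eta * y * x (when y * w^T x < 0)
    or leaves w unchanged."""
    z = y * np.dot(w, x)
    if z < 0:                  # loss is -z here; its gradient in w is -y * x
        return w + eta * y * x
    return w                   # zero loss and zero subgradient: no change
```

Comparing this case analysis with the mistake-driven update in the Perceptron sketch above is the heart of the question.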
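For part (b) of question 3, one possible brute-force ERM over decision stumps is sketched below. It checks only thresholds at the training points (plus one threshold below all of them), since the empirical error can only change at those values; the function and variable names are ours.

```python
import numpy as np

def erm_stump(xs, ys):
    """Empirical risk minimizer over decision stumps h_{theta,b}(x) = b if x <= theta,
    and -b otherwise. `xs` holds real-valued points, `ys` labels in {-1, +1}.
    Returns (empirical error, theta, b) for the best stump found."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys)
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    # Candidate thresholds: one below every training point, then each training point.
    candidates = [xs[0] - 1.0] + list(xs)
    best = (np.inf, None, None)
    for theta in candidates:
        for b in (-1, +1):
            preds = np.where(xs <= theta, b, -b)   # stump prediction on every point
            err = np.mean(preds != ys)             # empirical (training) error
            if err < best[0]:
                best = (err, theta, b)
    return best
```

This simple version runs in O(m^2) time; sorting once and maintaining a running error count would bring it to O(m log m), but the brute-force form is enough to see why ERM over H is easy.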

Struggling With a Similar Paper? Get Reliable Help Now.

Delivered on time. Plagiarism-free. Good Grades.

What is this?

It’s a homework service designed by a team of 23 writers based in Carlsbad, CA with one specific goal – to help students just like you complete their assignments on time and get good grades!

Why do you do it?

Because getting a degree is hard these days! Many students are forced to juggle demanding careers, family life, and a rigorous academic schedule. Having a helping hand from time to time goes a long way in making sure you get to the finish line with your sanity intact!

How does it work?

You have an assignment you need help with. Instead of struggling through it alone, you send us your assignment instructions, we select a team of 2 writers to work on your paper, and once it's done we send it to you via email.

What kind of writer will work on my paper?

Our support team will assign your paper to a team of 2 writers with a background in your degree – for example, if you have a nursing paper, we will select a team with a nursing background. The main writer handles the research and writing, while the second writer proofreads the paper for grammar, formatting, and referencing mistakes.

Our team is made up of native English speakers working exclusively from the United States.

Will the paper be original?

Yes! It will be just as if you wrote the paper yourself! Completely original, written from scratch following your specific instructions.

Is it free?

No, it’s a paid service. You pay for someone to work on your assignment for you.

Is it legit? Can I trust you?

Completely legit, backed by an iron-clad money back guarantee. We’ve been doing this since 2007 – helping students like you get through college.

Will you deliver it on time?

Absolutely! We understand you have a really tight deadline and need the paper delivered a few hours early so you can look at it before turning it in.

Can you get me a good grade? It’s my final project and I need a good grade.

Yes! We only pick projects where we are sure we’ll deliver good grades.

What do you need to get started on my paper?

* The full assignment instructions as they appear on your school account.

* If a Grading Rubric is present, make sure to attach it.

* Include any special announcements or emails you might have gotten from your Professor pertaining to this assignment.

* Any templates or additional files required to complete the assignment.

How do I place an order?

You can do so through our custom order page here or you can talk to our live chat team and they’ll guide you on how to do this.

How will I receive my paper?

We will send it to your email. Please make sure to provide us with your best email – we'll be using it to communicate with you throughout the whole process.

Getting Your Paper Today is as Simple as ABC

No more missed deadlines! No more late points deductions!


You give us your assignment instructions via email or through our order page.

Our support team selects a qualified writing team of 2 writers for you.


Within 5 minutes of placing your order, research & writing begins.

Complete paper is delivered to your email before your deadline is up.

Want A Good Grade?

Get a professional writer who has worked on a similar assignment to do this paper for you.