GTAC: Test Automation for Chrome OS and Partners
>>Yvette Nameth: And to end our GTAC, we have one last talk, on test automation for Chrome OS and partners, by Simran and Chris.

[ Applause ]

>>Chris Sosa: So my name is Chris Sosa. This is Simran Basi. We're both software engineers on the Chrome OS team, and today we're going to give a talk on test automation for Chrome OS. Before we start, here's a quick outline of our agenda. We're going to start the talk off with what Chrome OS is and the history of Chromebooks and Chromeboxes. Then I'll follow up with an overview of the continuous integration that's required to support the active development of Chromebooks.
Next, Simran is going to give an overview of testing and go into the life of a test. And finally, we're going to finish off with some of the requirements for our partners who test Chromebooks and Chromeboxes, both before and after they give them to us, as well as Moblab, which is a productized version of our test automation services.

Before I go too far, I want to remind everyone what Chrome OS is, or, for those of you who don't know, tell you for the first time. Chrome OS is a Linux-based distribution that powers Chromebooks and Chromeboxes. It was first created by the Chrome folks, who wanted to apply the three tenets of the Chrome browser to operating system development. Those three tenets are speed, security, and simplicity. For an OS, speed means fast boot and fast performance. Security means being able to apply zero-day patches within a day. And simplicity means basically getting the OS out of the way: you shouldn't be aware that there's a BIOS running, you shouldn't be aware that Linux is running underneath. And most people aren't aware.

A quick history, to give a sense of the scale of the problem. We open sourced in 2009. We shipped our first Chromebook, the CR-48, in 2010. Since 2010, we've basically shipped 2x more devices per year, so at this point we're shipping over 50 distinct Chromebook and Chromebox variations across multiple architectures, like x86 and ARM, and various reference boards. In terms of the development community, we have over 1,000 check-ins per week -- not per day; I've made that mistake -- across hundreds of projects. And as part of supporting the three tenets of Chrome, one of the most important things is keeping the browser and the operating system up to date. So we push a new release every six weeks to all Chromebooks and Chromeboxes in the field. In fact, the CR-48 is still shipping, I guess, Chrome 47 at this point.

So what kind of development model does it take to support this? Being able to ship everything you've ever shipped, on physical devices, every six weeks is a pretty difficult problem.
And on top of that, we have active development on all parts of the stack. We have our own (indiscernible) distribution, which means we have a kernel team that does active development on the kernel. As some of you may know, a kernel change can very easily brick a device. And so far we've actually not bricked any devices, so that's good.

In order to support this and do it right, we've taken the continuous integration model that a lot of web app developers have -- basically keeping trunk always in a very good state -- and applied it to operating systems. That means we have a submit queue that gates any change that might break or brick any Chromebook or Chromebox in active development. Which is, I guess, all of them right now, because we still haven't hit the five-year cycle on the CR-48.

So, yes, trunk is always in a near-shippable state. Our branches are only for stabilization. All feature changes and all bug fixes must land on trunk first.

Before we talk about test automation, I want to give you a quick overview of the build system, because that gives a little context for the test automation. We have a submit queue, as I mentioned, and we do some building and testing on all 50 variants in the submit queue. We do physical device testing as well as emulator testing, so we don't have to pick a side: we have both. Emulator-based testing helps a lot with getting things fast and quick.
Physical device testing is added on the submit queue to help, and it adds a lot of coverage. We also have release builders. These do four canaries a day, and they do longer testing, because their time requirements aren't as intense: the submit queue gates developers from checking things in, so that's got to be quick. We aim for 99% coverage on the submit queue and leave the 1% of slower tests to release builds. We also have trybots, which are basically an infrastructure service that allows you to emulate any of our bots that do builds or tests. So a developer can say, "Hey, my submit queue run failed, I don't quite understand it, and I don't have a CR-48 on my desk. How do I reproduce this?" Run a trybot, and it will run both the build and the test in our physical lab.

As a point of reference, we use Buildbot, which is an open source, waterfall-style continuous integration service. In fact, all of our infrastructure is open source, because all of Chrome OS is open source; we sort of inherited that from being part of Chrome. So trunk, unlike on some other open source projects, is actually open source. All of our development happens in public, except for a small (indiscernible) set of repos. Most of the team interacts with our build system either through our code review system -- we post breakages on the changes you upload -- or through the Buildbot web UI. And this is a quick view of what the Buildbot waterfall looks like.
All of the columns are specific builds. Buildbot wasn't really meant to scale to lots and lots of builders on the same view, so this actually scrolls a very long way to the right. But we do have some quick overviews on top that give you a high-level view.

In terms of what a build does, here's a quick diagram. We sync, we build, and then we run a bunch of different testing services in parallel, including physical device testing -- roughly the flow sketched below.
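To make the shape of a build concrete, here is a minimal sketch of that sync/build/fan-out flow. This is not the real builder code; the build script, board name, and test-stage commands are illustrative assumptions.

```python
# Illustrative builder run: sync, build, then run test stages in parallel.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run(cmd):
    """Run one build stage; a non-zero exit fails the build."""
    subprocess.run(cmd, check=True)

def build():
    run(["repo", "sync"])                        # sync the source tree
    run(["./build_packages", "--board=lumpy"])   # build (names illustrative)

    # Once an image exists, the testing services run concurrently.
    stages = [
        ["./run_vm_tests"],         # emulator-based testing (hypothetical script)
        ["./run_hw_tests", "bvt"],  # physical-device testing (hypothetical script)
    ]
    with ThreadPoolExecutor() as pool:
        for _ in pool.map(run, stages):
            pass  # iterating surfaces any stage failure as an exception

if __name__ == "__main__":
    build()
```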
Which leads us into the actual meat of the thing, where we talk about test automation on physical devices. Before I hand it off to Simran, I want to show you a few pictures of our automated test lab. It currently consists of about 2,400 different Chromebooks and Chromeboxes. Right here you see a lot of, I believe, Alex devices, which are a device we shipped four years ago. One of the interesting things to point out is that this thing here looks like a nice exciting mess: it's actually a debug board that we have installed on every device. One of the problems you get when you have thousands of devices under test is that, unless you want to hire a lot of people to physically bring them back up when they break, you need some sort of automated solution to repair them.

This is only a big issue if you are doing platform testing because, as I said, any kernel change can brick a device, and not every developer on the team has 50 devices at their desk. That would be kind of crazy. So we need some way to automatically repair them. These debug boards are connected to all of our Chromebooks and Chromeboxes through a debug header, and they can simulate the recovery flow that a consumer would activate if they had a bad machine because of a small bug or something. The debug board is also used for other development features: it's used for battery testing and other things. So we share it with other development teams, especially at the lower end of the stack, like the active development of the BIOS.

In these two pictures, we show what our racks looked like: on the left, before we added devices, and on the right, after. Chromebooks and Chromeboxes actively depend on WiFi connectivity, so we needed a really good way of doing functional and regression testing with WiFi and Bluetooth. These are RF isolation chambers, and they do all of our functional testing. They are a little bit expensive, so we try to mock out as much of the WiFi parts as possible. But with these, we are able to automatically and programmatically emulate the change in distance, as if you were moving a device further away from a router. We actually have both routers and Chromebooks and Chromeboxes in these things. So with that, I want to hand it off to Simran to talk about our test stuff.
>>Simran Basi: So testing on Chrome OS is done by a fork of the autotest testing framework. Autotest was originally designed for Linux kernel work, and we kind of forked it for hardware testing. The reason we chose autotest is because it came with a bunch of things we knew we were going to need off the bat. To start with, it gave us host and job management. We have this lab of 2,500 devices, and we need a simple way to manage each of them, look at their state, see what they're doing, and at the same time manage each job that's running. Autotest gave us all this off the bat. It gave us a test scheduler to make sure that tests actually ran, to monitor their status, and to give us results.
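For context, autotest describes each test with a small Python "control file" that the scheduler reads. A minimal client-side control file looks roughly like this; the field values here are illustrative, and the `job` object is supplied by the framework when it executes the file:

```python
# Minimal autotest-style control file (values illustrative).
AUTHOR = "chromeos-test"
NAME = "dummy_Pass"
TIME = "SHORT"
TEST_TYPE = "client"   # executes on the device under test
DOC = """
Trivial test used to exercise scheduling and result reporting.
"""

job.run_test('dummy_Pass')   # 'job' is injected by autotest at run time
```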
It came with a web front-end for all this as well, which made it easier for us to get up and going. And it came with a number of other services that we ended up using.

Now let me talk a little bit about the different kinds of test coverage we do. The test that we run most often is our build validation test; we call it the BVT, and it pretty much does regression testing. This is a really important test because our submit queue, our canary builds, everything runs the BVT. Any time a developer wants to submit a change to Chrome OS, it has to pass all our basic regression testing before their change will commit to the tree. This is how we keep Chrome OS as green as possible. Then we do actual full builds every day of all the changes that landed, and those also run the BVT. If the tree goes red, we stop submitting changes, and that lets us fix everything up for developers.

Next we have Chrome OS release tests. Whenever we want to do a release, people on our team choose a release, and we run a series of tests against that specific build. Release testing has to be done before we release anything. A good example of this is our autoupdate testing: we need to make sure that any update we push out can itself update to a new version, should something go wrong. Without that, we'd brick devices, and that would be terrible.
Next we have power tests. We can remotely turn off the AC power outlets to all the different Chromebooks and Chromeboxes. That allows us to cut the power and run WiFi stress testing and all sorts of other testing to see the life of the battery and how long the devices last, because that's a pretty important part of selling laptops.

Next we do hardware component testing. Because with Chrome OS we manage the software stack, Google is very closely tied to our partners, and any component a partner wants to put in a new Chromebook has to be on our approved vendor list. A good example of this would be internal SSDs: we'll do component testing and try to burn out an SSD to see how long it lives, to make sure it's going to be good for a Chromebook.

And, lastly, we have fully automated firmware tests. Inside each Chromebook is the Chrome embedded controller, and the firmware tests ensure it can do all the stages of recovery and all the basic BIOS-level stuff that we expect a Chromebook to do.

So now I'm going to go over the different servers we have in the lab and, pretty much, the life of a test. A test can be created either through our web front-end, through our suite scheduler, or through our golo proxy. An individual developer can request a test to be kicked off. Our suite scheduler runs our very slow tests regularly, like a nightly test or a weekly test, and it schedules them and ensures they're kicked off.
And our builders, which Chris went over before, will talk to the proxy to kick off a job once they have placed a build into Google Storage. Pretty much, our RPC proxy just takes this request and translates it into a database entry which says, "I want to run the BVT on this build for this device," which could be, for example, a Samsung Chromebox i3. Once this entry is in the database, our infrastructure knows that it needs to run this test.

This is where our host scheduler comes in. We have 2,500 devices, and they are all in different states at any time. They can be ready, they can be running a test, they can be verifying for a new test, or -- a bad state -- they can be "repair failed," where our automatic repair processes didn't work and the device should not be used for testing.
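That translation step is easy to picture in code. Here is a hedged sketch of a proxy turning a "run this suite" request into a database row for the schedulers to pick up; the table layout, file name, and the build and board strings are assumptions for the example, not the real autotest schema:

```python
# Sketch: the RPC proxy records a requested test run as a database row.
import sqlite3

def create_suite_job(db, suite, build, board):
    db.execute(
        "INSERT INTO jobs (suite, build, board, status) "
        "VALUES (?, ?, ?, 'queued')",
        (suite, build, board))
    db.commit()

db = sqlite3.connect("afe.db")   # file name illustrative
db.execute("CREATE TABLE IF NOT EXISTS jobs "
           "(id INTEGER PRIMARY KEY, suite, build, board, status)")
create_suite_job(db, "bvt", "R40-6457.0.0", "samsung-chromebox-i3")
```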
So the host scheduler looks at the database and sees what tests need to be run. Say the BVT I mentioned earlier wanted a Samsung Chromebox i3: it will look for a Chromebox i3 that's in the ready state and assign it to the job. At this point, the scheduler will see that this test is ready to run and has a host, and it will go through the process of kicking off the job and monitoring it.

The scheduler is probably the most important part of our infrastructure. It launches the tests. It aborts jobs that might be hung, because they all have a timeout: if a job gets stuck for 24 hours, it's wasting resources and a device, so it needs to be aborted. It also manages all the other servers that are actually executing tests.
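The matching pass at the heart of the host scheduler can be pictured as a small loop, something like the sketch below; the real scheduler tracks far more state than this:

```python
# Sketch of the host scheduler's matching pass: pair each queued job
# with a ready device of the right board, then claim that device.
def match_jobs_to_hosts(jobs, hosts):
    """jobs/hosts are dicts, e.g. {'board': 'samsung-chromebox-i3', ...}."""
    assignments = []
    for job in jobs:
        ready = [h for h in hosts
                 if h['state'] == 'ready' and h['board'] == job['board']]
        if ready:
            host = ready[0]
            host['state'] = 'running'        # claim the device
            assignments.append((job, host))
        # with no ready host, the job simply stays queued for the next pass
    return assignments
```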
Earlier, when I was talking about autotest, I mentioned we support two different types of tests. One type is called a server-side test. These are tests that run on a server and manipulate a device. The most basic example would be a reboot test: the test runs on one of our drone servers and tells a device to reboot, then ensures the device comes back up. We'd do this for, say, ten reboots and make sure the device comes up every time. The other type of test is a client-side test. This is a test that executes on the device itself, but the drone is still responsible for kicking off that process, monitoring it, and ensuring it doesn't get stuck.
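In code, a server-side reboot test boils down to something like the sketch below. The class shape follows autotest's server-test conventions, but the test name and details are illustrative, not an actual test from the tree:

```python
# Sketch of a server-side test: it runs on a drone and drives the
# device under test (the "host") over the network.
from autotest_lib.client.common_lib import error
from autotest_lib.server import test

class platform_RebootLoop(test.test):
    """Reboot the device repeatedly and verify it always comes back."""
    version = 1

    def run_once(self, host, iterations=10):
        for i in range(iterations):
            host.reboot()                      # reboot and wait for the device
            if not host.wait_up(timeout=120):  # belt-and-braces check
                raise error.TestFail('Device did not return on reboot %d' % i)
```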
So the drones are the servers that kick off and monitor all of our tests.

Now, like Chris was saying earlier, all our builds are done on our builders, but we need to store them somewhere. When a build is completed, it's placed into Google Storage so that our lab has access to it as well as our developers. If you're not familiar with it, Google Storage is just a big back-end into which you can put a bunch of files.

Because of all this traffic and all the tests we are running, if we had 20 different Chromebooks trying to download the same image off of Google Storage, we'd kill our lab's network bandwidth. So for that we developed new servers called dev servers. They are essentially a cache in front of Google Storage: they pull the image into the lab topology and store it there for 24 hours. The other thing the dev servers do is emulate our normal Omaha update servers, so that when the Chromebooks are flashing and installing a new image, it's as if they are going through the normal update process your Chromebook would do in the field. So, for my earlier example, a builder kicks off a test; the device then downloads that build from the dev server, which fetches it from Google Storage if it isn't already cached, and updates before running the test.
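The caching behavior is simple to picture. Here is a sketch that stages a build locally, shelling out to `gsutil`, Google Storage's command-line client; the cache path and bucket layout are illustrative assumptions:

```python
# Sketch of the dev-server cache: serve a build from local disk if we
# already have it, otherwise pull it from Google Storage first.
import os
import subprocess

CACHE_DIR = "/var/cache/devserver"   # illustrative path

def stage_build(build):
    local = os.path.join(CACHE_DIR, build)
    if not os.path.isdir(local):
        os.makedirs(local)
        subprocess.run(                       # bucket name illustrative
            ["gsutil", "-m", "cp", "-r",
             "gs://chromeos-image-archive/%s/*" % build, local],
            check=True)
    return local   # devices in the lab download the image from here
```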
After the tests have run, we upload all the test result files back to Google Storage. This way they're accessible to us and to all the developers who need to see logs should something fail. We actually save almost all our test results for the last six months, so if something goes wrong, we have something to compare against.

Lastly, we need to keep scaling. Our lab keeps growing exponentially in the number of devices. For this, we have a concept of shards, which allows us to just keep adding shards and scale without hitting our limits.
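A shard here owns a slice of the device pool, so scheduling work for those boards happens on the shard rather than on one central scheduler. A sketch of the routing idea, with made-up hostnames and boards:

```python
# Sketch of shard routing: each shard owns certain boards, so host
# matching and scheduling for those boards happen on that shard.
SHARD_MAP = {                       # contents illustrative
    "shard1.lab": ["lumpy", "parrot"],
    "shard2.lab": ["daisy", "peach_pit"],
}

def shard_for(board):
    for shard, boards in SHARD_MAP.items():
        if board in boards:
            return shard
    return "master.lab"   # unsharded boards stay on the main scheduler
```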
So I just went over the topology of our lab: all these different servers, and they all work together. If I were to write a test, it could interact with the dev server, the drone, the scheduler, and the database. And it gets really difficult for our external partners to replicate all that, should we find a failure that they need to recreate to help fix the issue.

So we came up with the idea of Moblab. Instead of having five servers, it's a single server that includes all those services and replicates our lab. Pretty much the reason we needed Moblab is the time hardware bringup takes: going back and forth between us and our partners in Asia can take a long time. For example, a proto board is built; they send us the board; it takes seven days to ship and has to get through customs; then we find an issue and have to tell them; then they do another run. And this just adds up. Moblab lets them do the testing and replicate our lab on their end. Our testing is now scaled out to our partners as well, which allows us to ship devices faster, and that's good for Google and good for our hardware partners.

So what is Moblab exactly? Essentially, we took the basic Chromebox image and installed all the different servers and services required to run our lab infrastructure. From there we added all the logic to initialize and configure everything correctly, and simplified it as much as possible. This includes the Apache web server with the web interface, the same database structure that we use, Python, and the dev server code so they can download images from Google Storage. That's actually really important, because when we post images now, they can download them and test with the exact same images that we produce. And it includes all the other services that I mentioned before.
Now, the great thing about it is that it's actively developed by us. Whenever we update the test framework, we can autoupdate and push it out to our partners. And it's also cross-platform.

These are a couple of basic examples of what our different partners might want to do. OEMs and ODMs are the people who are making the new laptops. Running the BVT on a brand-new board would be a great way for them to verify how the hardware is doing. They do stress testing; they might want to run the power load test before they choose a battery; and they also want to make sure the firmware is correct.

Then there are SoC vendors. Intel, say, might want to try different kernels or do regression testing against new kernel changes. Generally they make a reference board: they don't ship a Chromebox themselves, but other partners like Dell or HP will build a Chromebox around it.

Hardware component vendors want to sell their components to OEMs, and they can't unless we approve them. So by giving a Moblab to, say, SanDisk, they can do SSD validation and send us the test results -- the results go back to Google Storage, where we can see them and say whether the new part is valid to be used in Chromebooks. And, lastly, BIOS vendors want to make sure that their BIOS works with our firmware.
So the benefit of Moblab for our partners is that it's really easy to set up: they pretty much just take a Chromebox, install the image, and everything sets itself up properly. It's a common test framework, and the great thing about that is, should we find a failure, we can just tell them what test to run and they can reproduce it at their site. Before, if we tried to do it with our workstation versus their workstation -- different setups, different operating systems -- issues might not be reproducible. The faster debug cycles, now that we can easily reproduce issues, are a great benefit to everybody involved.

These partners also get all the tool benefits that we have in autotest, because they can manage all the different devices they have as well as easily kick off tests. Like I said, it's autoupdated by us. And the really great thing is it's open source: if they have ideas or changes they want to make, they can create a CL and upload it for us to review, and it becomes part of the Moblab platform.

This is an example of a lab setup that a partner might do. The Chromebox in the middle is the actual Moblab device, and we only have a few requirements to get it going. One is Internet access. Because a lot of these factories are based in China, that might mean VPN access so that they can reach Google services. The reason they need access to our services is, one, for Google Storage; that's where our builders will give them the images they can test. And, two, they need access to the AU (autoupdate) server so that, should there be an update, they can download it and get the new infrastructure that we have been building.
Next, on the side, you can see we have a test subnet. Pretty much we just tell them to add another Ethernet interface and a switch, and you can hook up all your test devices there; our system knows to test the devices on that network. And then, from their own corporate network, they can use their own laptop to kick off tests.

Now, Chris was showing you earlier in the talk the debug board. We actually give these to our partners so that they can run our full suite of tests. That is also supported with the Moblab setup, so should their devices need to be repaired, it can happen automatically.

Beyond just Chrome OS, Moblab has been applied to a number of other platforms. OnHub actually did all their testing using Moblab devices: they created a quick lab by adding a couple of Moblabs, and they were able to get going. Brillo, which is going to be Google's Internet of Things platform, is going to be supported by Moblab. And Android support is coming as well.

Some additional features that Moblab gives you: custom OS image support. This allows our partners to do their own custom images. Say they want to change the kernel version: they can create their own custom image with the new kernel and then run all our tests against it. Custom test support: they can write their own tests, and before they even submit the tests into our tree, they can run them and make sure they work and validate them.
Private test repository support: you can keep your tests outside of our sources and still run them. This is great because, while Moblab is open source, OnHub is not, so those tests are closed source. We also have virtual machine support to make it easier for people to get Moblab set up. And we have a tool called mobmonitor, which simplifies setup and lets people know when things go wrong.

>>Chris Sosa: Anyway, as Simran and I said, Chrome OS is based on Chromium OS, which is a fully open source project. If you have any more questions, there's our discuss list; feel free to send an email. All of our documentation is online, and our fork of autotest is also publicly available, so feel free to check it out. Any questions?

[ Applause ]

>>Yvette Nameth: Thanks, guys. What type of bugs do real device tests find that emulator tests don't?

>>Simran Basi: WiFi, Bluetooth. Those are things that are hard to test.

>>Chris Sosa: A lot of kernel bugs, actually. A lot of kernel bugs. There's a lot of stuff that's emulated in the emulator -- like, the touchpad is emulated, so any touchpad regressions you have, you won't see. You could theoretically fix this problem by reimplementing all the low-level device driver stuff to also work in the emulator, but we usually don't, because that requires twice the amount of development work.
So basically anything low level is not caught by the emulators.

>>Yvette Nameth: Greater than 100 git repos? How are those synchronized? Do they depend on each other at head? Are branches required, perhaps, for different hardware or Chromium versions?

>>Chris Sosa: Our submit queue creates a snapshot of the repo manifest. We use repo, which is basically a way of working with a number of git projects so you collectively have one giant source tree; Android uses the same thing. If you create a snapshot of that, it pins all the hashes down, so you basically have a fixed version of trunk. That's what our commit queue uses. For any release, we always generate a snapshot that we distribute to all of our builders building the same canary at the same time. That's how we get around that.
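Concretely, `repo manifest -r` writes out a manifest with every project pinned to its current commit hash; producing such a snapshot looks roughly like this (the output file name is illustrative):

```python
# Sketch: pin the whole multi-repo tree to fixed git hashes.
# "repo manifest -r" records each project's current revision.
import subprocess

subprocess.run(["repo", "manifest", "-r", "-o", "snapshot.xml"],
               check=True)
# snapshot.xml can now be handed to every builder so that they all
# build exactly the same version of trunk.
```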
>>Yvette Nameth: How many bugs appear only on physical devices versus passing in emulators?

>>Chris Sosa: A lot of them.

[ Laughter ]

The thing is, if you think about it, we have 50 variations of Chromebooks and Chromeboxes, and emulators cut down the amount of variation you will see in tests quite a bit. They are really good: if you have a regression in the browser, say, emulators will catch that almost 100% of the time. But if you have anything like a low-level service talking to another low-level service, emulators aren't going to catch it. Emulators usually prevent those kinds of changes from ever reaching the hardware tests, because we have a tiered system of testing. So it's kind of hard to say, because I would have to look at bugs caught only by emulators versus only on physical devices. But they are both very valuable. That's the most I can say without spending a long time looking at all the logs.

>>Yvette Nameth: Awesome test lab. But 2,400 machines, wow. Any thoughts or plans to streamline the number of machines for tests?

>>Simran Basi: It's just going to keep growing?

>>Yvette Nameth: So the opposite.

>>Simran Basi: I think we have a set number that we require for every new device.

>>Chris Sosa: We have a set number. And a lot of it is up to the software -- I guess we are the same team -- but a lot of it is basically up to how we choose to develop.
Because we have a unique image per device -- the software is unique per device, because we're trying to keep the image as small as possible to keep the speed tenet of Chrome -- it's really hard. If you had a lot more generalized drivers that applied equally, you might be able to get coverage without having all the devices. And we're actually thinking about potentially reducing the number of devices we need for follower devices. Usually with device development you have a reference board that has most of the drivers specced out, but then an individual partner might have the exact same thing as another partner, just with an i5 instead of an i3. In those cases, we can use a lot fewer devices. Given that we really don't want to brick anyone in the field, we do want at least a small amount of coverage, but we can definitely get away with fewer.

>>Yvette Nameth: Have you considered something else to manage the test lab instead of autotest -- for example, beaker-project.org, used by Red Hat and Fedora?

>>Simran Basi: We've looked at other frameworks, but we're so ingrained in autotest that we're kind of stuck with it.

>>Chris Sosa: Yeah, we're pretty open-minded. But especially since we're both part of a continuous integration team, a lot of our work has been scaling out autotest, and a lot of the services we've built are in the fork of autotest we use.
For example, Simran talked about sharding, which addresses one of the big problems with autotest: it schedules everything on one thread. And when you have 2,400 devices running hundreds of thousands of tests, one thread can only do so much. So we've distributed and sharded out the work of matching hosts to tests, which we're probably looking to upstream, but we haven't yet. So we're definitely open to it. Autotest is a great framework, but it's not perfect, so we're always open to looking at other things.

>>Yvette Nameth: How do you do battery tests?

>>Simran Basi: We have these big power outlets that are actually running Linux, and our lab topology is set up so that each device is plugged into the correct outlet. A server-side test can request, as I mentioned before, "turn this device, on this rack, on this rail, off," and the power will actually be disconnected. At the end of the test, we clean up and make sure that power is restored. But pretty much that's how we can kill the AC remotely and do the battery testing.
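Mechanically, a test just asks a power-management service to flip a named outlet and restores it afterward. A sketch of that pattern, where the server address and the `set_power` call are assumptions rather than the real lab API:

```python
# Sketch of remotely cutting AC power for a battery test.
import xmlrpc.client

# Hypothetical remote-power-management endpoint.
rpm = xmlrpc.client.ServerProxy("http://rpm-server.lab:9999")

def run_on_battery(device, test_fn):
    rpm.set_power(device, "OFF")      # kill AC; device runs on battery
    try:
        test_fn()                     # e.g. WiFi stress while discharging
    finally:
        rpm.set_power(device, "ON")   # always restore power afterward
```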
>>Yvette Nameth: How complex do the power tests or battery tests get? For example, is battery duration watched while watching videos, or playing HTML5 or JavaScript-intensive games? And do you ever open bugs against Chromium for decreasing battery life?

>>Simran Basi: I'm not too sure of the specifics of the power tests, to be honest.

>>Chris Sosa: Yeah, our software model is that we provide the infrastructure, and individual developers on the subteams do all of the actual testing. So the power team would be able to answer that a lot better.

>>Yvette Nameth: Well, we are now out of time. So thank you very much, guys.

>>Chris Sosa: Thank you.

[ Applause ]