Search Results

Search found 23653 results on 947 pages for 'oracle workforce management'.

  • How do I get security updates for restricted/partner packages?

    - by laramichaels
    I want to perform just security updates on Ubuntu 12.04 LTS, keeping the rest of the system unchanged. I need to do this from the command line, not the GUI update manager. I have implemented the solution described here, which seems to work great for this purpose; I merely substituted 'precise' for 'lucid', since I am on 12.04. My question is: by using apt pinning as described in that answer, will I still receive security updates for packages distributed through the "other" repositories - partner, restricted, multiverse, etc.? Or will it only get me updates for the packages in the "core" distribution? Thanks! ~l
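    For reference, since the linked answer is not quoted above: pinning of this kind normally lives in /etc/apt/preferences, along the lines of the following sketch (my assumption of the setup, not the answer itself - verify with apt-cache policy):

        # /etc/apt/preferences - let security updates through, hold everything else back
        Package: *
        Pin: release a=precise-security
        Pin-Priority: 500

        Package: *
        Pin: release a=precise-updates
        Pin-Priority: 50

    One point worth knowing: restricted, universe, and multiverse are components of the same Ubuntu archive, so a pin on a=precise-security covers their security pockets too, while partner is served from a separate archive (archive.canonical.com) that has no -security pocket of its own.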

  • Tuxedo Load Balancing

    - by Todd Little
    A question I often receive is how Tuxedo performs load balancing. It is often asked by customers who see an imbalance in the number of requests handled by servers offering a specific service.

    First of all, let me say that Tuxedo really does load or request optimization instead of load balancing. What I mean by that is that Tuxedo doesn't attempt to ensure that all servers offering a specific service get the same number of requests, but instead attempts to ensure that requests are processed in the least amount of time. Simple round-robin "load balancing" can be employed to ensure that all servers for a particular service are given the same number of requests, but the question I ask is, "to what benefit?"

    Instead, Tuxedo scans the queues (which may or may not correspond to servers, based upon SSSQ - Single Server Single Queue - or MSSQ - Multiple Server Single Queue) to determine on which queue a request should be placed. The scan is always performed in the same order, and if an empty queue is found during the scan, the request is immediately placed on that queue and request routing is done. Should all the queues be busy, meaning that requests are currently being processed, Tuxedo chooses the queue with the least amount of "work" queued to it, where work is the sum of all the queued requests weighted by their "load" value as defined in the UBBCONFIG file.

    What this means is that under light loads, only the first few queues (servers) process all the requests, as an empty queue is often found before reaching the end of the scan. Thus the first few servers in the scan order handle most of the requests. While this sounds non-optimal, it in fact capitalizes on the behavior of the underlying operating system and hardware to produce the best possible performance. Round-robin scheduling would spread the requests across all the available servers, thus requiring all of them to be in memory, while likely not sharing much in the way of hardware or memory caches. Tuxedo's approach maximizes the various caches and thus optimizes overall performance. Hopefully this makes sense and explains why you may see a few servers handling most of the requests. Under heavy load, meaning enough load to keep all servers that can handle a request busy, you should see a relatively equal number of requests processed.

    Next post I'll try to cover how this applies to servers in a clustered (MP) environment, because the load balancing there is a little more complicated.

    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect
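    By way of illustration, the scan described above amounts to something like the following sketch (an invented illustration of the policy, not Tuxedo source code; the "load" values are the ones assigned in UBBCONFIG):

        /* Invented sketch of the queue-scan policy described above. */
        #include <stddef.h>

        typedef struct {
            int  pending;   /* requests currently sitting on this queue */
            long work;      /* sum of the queued requests' LOAD values  */
        } queue_t;

        /* Scan the queues in their fixed order and pick one for the new request. */
        size_t choose_queue(const queue_t *queues, size_t nqueues)
        {
            size_t best = 0;
            for (size_t i = 0; i < nqueues; i++) {
                if (queues[i].pending == 0)
                    return i;          /* an empty queue ends the scan immediately */
                if (queues[i].work < queues[best].work)
                    best = i;          /* otherwise remember the least-loaded queue */
            }
            return best;
        }

    Under light load the early return fires almost every time, which is exactly why the first few servers in the scan order end up handling most of the traffic.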

  • Need a Holistic view of your Concurrent Processing?

    - by cwarticki
    Need a holistic view of your Concurrent Processing? Choose the CP Analyzer. Go to Doc 1411723.1 for more details and the script download. The Concurrent Processing Analyzer is a self-service health-check script which reviews the overall Concurrent Processing footprint, analyzes the current configurations and settings for the environment, and provides feedback and recommendations on best practices. This is a non-invasive script which provides recommended actions to be performed on the instance it was run on. For production instances, always apply any changes to a recent clone first to ensure an expected outcome.

    - E-Business Applications Concurrent Processing Analyzer Overview
    - E-Business Applications Concurrent Request Analysis
    - E-Business Applications Concurrent Manager Analysis
    - Identifies Concurrent System setup and configurations
    - Identifies and recommends Concurrent best practices
    - Easy-to-add tool for regular Concurrent maintenance
    - Execute the analysis anytime to compare trends against past outputs

    Feedback welcome!

  • Entity Framework 4.3.1 Code-based Migrations and Connector/Net 6.6

    - by GABMARTINEZ
    Code-based migrations are a new feature in the Connector/Net support for Entity Framework 4.3.1. In this tutorial we'll see how to use them to keep track of the changes made to our database, creating a new application using the code-first approach. If you don't have a clear idea of how code first works, we highly recommend reviewing that subject before going further with this tutorial.

    Creating our Model and Database with Code First (from VS 2010)

    1. Create a new console application.

    2. Add the latest official Entity Framework package using the Package Manager Console (Tools menu, then Library Package Manager -> Package Manager Console). In the Package Manager Console, type:

        Install-Package EntityFramework

    This will add the latest version of the library. We will also need to make some changes to the config file: a <configSections> element is added which contains the EntityFramework version in use, and an <entityFramework> section is added where you can set up some initialization. The latter section is optional and by default is generated to use SQL Express. Since we don't need it for now (we'll see more about it below), leave this section empty.

    3. Create a new Model with a simple entity.

    4. Enable Migrations to generate our Configuration class. In the Package Manager Console, type:

        Enable-Migrations

    This makes some changes in our application. It creates a new folder called Migrations, where all the migrations representing the changes made to our model will be stored. It also creates a Configuration class that we'll use to initialize our SQL generator and some other values, such as whether we want to enable automatic migrations. You can see that it already has the name of our DbContext. You can also create your Configuration class manually.

    5. Specify our model provider. We need to specify in our Configuration class that we'll be using MySqlClient, since this is not part of the generated code. Also make sure you have added the MySql.Data and MySql.Data.Entity references to your project.

        using MySql.Data.Entity;   // Add the MySql.Data.Entity namespace

        public Configuration()
        {
            this.AutomaticMigrationsEnabled = false;
            SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());   // This adds the MySQL client as SQL generator
        }

    6. Add our data provider and set up our connection string:

        <connectionStrings>
          <add name="PersonelContext" connectionString="server=localhost;User Id=root;database=Personal;" providerName="MySql.Data.MySqlClient" />
        </connectionStrings>
        <system.data>
          <DbProviderFactories>
            <remove invariant="MySql.Data.MySqlClient" />
            <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
          </DbProviderFactories>
        </system.data>

    (The connection string is named after our context class, PersonelContext, so Entity Framework will pick it up by convention. The recommended version of Connector/Net is 6.6.2 or later.)

    At this point we can create our database and then start working with migrations, so let's do some data access so that our database gets created. Run your application and you'll get your database, Personal, as specified in the config file.

    Add our first migration

    Migrations are a great resource, as we can keep a record of all the changes made and generate the MySQL statements required to apply them to the database.
    Let's add a new property to our Person class:

        public string Email { get; set; }

    If you try to run your application, it will throw an exception saying:

        The model backing the 'PersonelContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269).

    So, as suggested, let's add our first migration for this change. In the Package Manager Console, type:

        Add-Migration AddEmailColumn

    Now we have the corresponding class, which generates the operations needed to update our database:

        namespace MigrationsFromScratch.Migrations
        {
            using System.Data.Entity.Migrations;

            public partial class AddEmailColumn : DbMigration
            {
                public override void Up()
                {
                    AddColumn("People", "Email", c => c.String(unicode: false));
                }
                public override void Down()
                {
                    DropColumn("People", "Email");
                }
            }
        }

    In the Package Manager Console, type:

        Update-Database

    Now you can check your database to see that all the changes were successfully applied. Let's make a second change and generate our second migration:

        public class Person
        {
            [Key]
            public int PersonId { get; set; }
            public string Name { get; set; }
            public string Address { get; set; }
            public string Email { get; set; }
            public List<Skill> Skills { get; set; }
        }

        public class Skill
        {
            [Key]
            public int SkillId { get; set; }
            public string Description { get; set; }
        }

        public class PersonelContext : DbContext
        {
            public DbSet<Person> Persons { get; set; }
            public DbSet<Skill> Skills { get; set; }
        }

    If you would like to customize any part of this code, this is the step at which to do it. You can see there is the Up method, which upgrades your database, and the Down method, which reverts the change; if you customize any code, make sure to customize both methods. Now let's apply this change:

        Update-Database -Verbose

    I added the Verbose flag so you can see all the generated SQL statements being run.

    Downgrading changes

    So far we have always upgraded to the latest migration, but there may be times when you want to downgrade to a specific migration. Let's say we want to return to the state we had before our last migration. We can use the -TargetMigration option to specify the migration we'd like to return to (the -Verbose flag works here too). If you'd like to go back to the initial state, you can run:

        Update-Database -TargetMigration:$InitialDatabase

    or, equivalently:

        Update-Database -TargetMigration:0

    By default, Migrations will not apply a migration that would result in data loss; one case where you can get this message is a DropColumn operation. You can override this behavior by setting AutomaticMigrationDataLossAllowed to true in the Configuration class.

    You can also set a database initializer so that migrations are applied automatically, without having to go all the way through creating a migration and then updating the database. Let's see how.

    Database initialization by code

    We can specify an initialization strategy by using Database.SetInitializer (http://msdn.microsoft.com/en-us/library/gg679461(v=vs.103)). One strategy that I found very useful at a development stage (that is, not for production) is MigrateDatabaseToLatestVersion. This strategy runs all the necessary migrations each time there is a change in our model that needs to be replicated in the database; this also implies that we have to enable the AutomaticMigrationsEnabled flag in our Configuration class:
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
            AutomaticMigrationDataLossAllowed = true;
            SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());   // This adds the MySQL client as SQL generator
        }

    In the new <entityFramework> section of your config file, this can be set on a per-context basis. The syntax is as follows:

        <contexts>
          <context type="Custom DbContext name, Assembly name">
            <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[Custom DbContext name, Assembly name], [Configuration class name, Assembly name]], EntityFramework" />
          </context>
        </contexts>

    In our example (using the PersonelContext and Configuration classes from this tutorial) this would be:

        <contexts>
          <context type="MigrationsFromScratch.PersonelContext, MigrationsFromScratch">
            <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[MigrationsFromScratch.PersonelContext, MigrationsFromScratch], [MigrationsFromScratch.Migrations.Configuration, MigrationsFromScratch]], EntityFramework" />
          </context>
        </contexts>

    The syntax is kind of odd, but very convenient. This way, all changes will always be applied when we do any data access in our application.

    There are a lot of new things to explore in EF 4.3.1 and Migrations, so we'll continue writing more posts about it. Please let us know if you have any questions or comments, and check our forums here, where we keep answering questions for the community. Hope you found this information useful. Happy MySQL/.Net Coding!

  • Agile PLM 9.3 Service Pack 2 (SP2 or 9.3.0.2) is released along with AUT 1.6.2.0 and AutoVue 20 for Agile PLM

    - by Shane Goodwin
    Oracle released Agile PLM 9.3 SP2 on June 14 and the Agile installer for AutoVue 20 for Agile PLM on April 30. Also available are the new versions of AUT and Averify - 1.6.3 for both tools. 9.3 SP2 is a combined English and NLS release for use on any version of 9.3.0. SP2 contains many bug fixes and rolls up several Hot Fixes - please review the Readme for all the details.

    In addition, this release also addresses some scalability issues when working with very large exports and reports. When exporting very large BOMs, the export module will now release objects more efficiently to reduce the amount of memory consumed on the application server. Administrators can also control the maximum row limits for users versus system processes, like ACS. Several out-of-the-box BOM reports have also been changed to use a new row limit option. The combination of all these changes will provide more stability on the application server for customers managing very large datasets. 9.3 SP2 also adds support for Oracle Database 11gR2 for Windows, Oracle Internet Directory (OID), and Oracle Access Manager (OAM). Please note that currently the Variant Patch is not intended to be released for SP2; customers running the Variant Patch should remain on 9.3.0.0 or 9.3.0.1.

    Back in April, we also released the AutoVue 20 for Agile PLM installer. AutoVue 20 has many new features which will help Agile PLM customers. Large multi-page Word documents and 2D CAD documents will open more quickly to the first page or first rendition. Memory usage is lower when working with 3D models. There are many new formats supported for MCAD, 2D CAD, and EDA. AutoVue 20 is immediately available for Windows and Linux platforms. The new software can be found in E-Delivery or Metalink / Oracle Support:

    - AutoVue 20 for Agile PLM is on E-Delivery with part number B58963-01
    - Oracle Agile PLM 9.3 Service Pack 2 (9.3.0.2): My Oracle Support Patch ID 9782736
    - AVERIFY 1.6.3: My Oracle Support Patch ID 9791892
    - AUT 1.6.3: My Oracle Support Patch ID 9791908
    - Agile PLM 9.3 SP2 documentation is available on the OTN Agile Documentation Page

  • Using Exception Handler in an ADF Task Flow

    - by anmprs
    Problem statement: an exception thrown in a task flow gets wrapped in an exception that gives an unintelligible error message to the user (Figure 1).

    Solution 1: overwriting the error message with a user-friendly error message (Figure 2).

    Steps to code:
    1. Generating an exception: write a method that throws an exception and drop it in the task flow.
    2. Adding an exception handler: write a method (a sketch follows at the end of this entry) to overwrite the error in the bean or data control, and drop the method in the task flow (Figure 3). This method is marked as the Exception Handler via Right-Click on the method > Mark Activity > Exception Handler, or by the button shown in the screenshot (Figure 4). The final task flow should look like Figure 5. This will overwrite the exception with the error message in Figure 2. Note: there is no need for a control flow between the two method calls.

    Solution 2: re-routing the task flow to display an error page (Figure 6).

    Steps to code:
    1. This is the same as step 1 of Solution 1.
    2. Adding an exception handler: the exception handler is not always a method; in this case it is implemented on a task flow return. The task flow looks like Figure 7. In Figure 8 you will notice that the task flow return points to a control flow 'error' in the calling task flow. This control flow in turn goes to a view 'error.jsff', which contains the error message one wishes to display, as seen in Figure 9. ('withErrorHandling' is a call to the task flow in Figure 7.)
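    Since the handler method itself only appeared in the original figures, here is a hypothetical sketch of what such a method in a managed bean could look like (the class name and message text are invented; it simply queues the friendlier message of Figure 2 through plain JSF):

        import javax.faces.application.FacesMessage;
        import javax.faces.context.FacesContext;

        public class ErrorHandlerBean {
            /* Dropped into the task flow and marked as its Exception Handler;
             * replaces the raw wrapped exception with a readable message. */
            public void handleException() {
                FacesContext ctx = FacesContext.getCurrentInstance();
                ctx.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR,
                        "The operation could not be completed. Please try again.", null));
            }
        }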

  • Understanding hand-written lexers

    - by Cole Johnson
    I am going to make a compiler for C (C99; I own the standards PDF), written in C (go figure), and looking up how compilers work on Wikipedia has taught me a lot. However, reading up on lexers has confused me. The Wikipedia page states that the GNU Compiler Collection (gcc) uses hand-written lexers. I have tried googling what a hand-written lexer is and have come up with nothing except for "making a flowchart that describes how it should function" - however, isn't that how all software development should be done? So my question is: "What is a hand-written lexer?"
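    For context, a hand-written lexer is exactly what the name suggests: the token recognition is coded by hand as a loop over input characters, rather than generated by a tool such as lex/flex from regular-expression rules. A minimal illustrative fragment (a sketch, not GCC's actual code):

        /* Hand-written lexer sketch: classify the next token by inspecting
         * characters directly, with no generator involved. */
        #include <ctype.h>
        #include <stdio.h>

        typedef enum { TOK_IDENT, TOK_NUMBER, TOK_PUNCT, TOK_EOF } token_kind;

        token_kind next_token(FILE *in, char *text, size_t cap)
        {
            int c = fgetc(in);
            while (c != EOF && isspace(c))      /* skip whitespace by hand */
                c = fgetc(in);
            if (c == EOF)
                return TOK_EOF;

            size_t n = 0;
            if (isalpha(c) || c == '_') {       /* identifier or keyword */
                do { if (n + 1 < cap) text[n++] = (char)c; c = fgetc(in); }
                while (c != EOF && (isalnum(c) || c == '_'));
                ungetc(c, in);
                text[n] = '\0';
                return TOK_IDENT;
            }
            if (isdigit(c)) {                   /* integer literal */
                do { if (n + 1 < cap) text[n++] = (char)c; c = fgetc(in); }
                while (c != EOF && isdigit(c));
                ungetc(c, in);
                text[n] = '\0';
                return TOK_NUMBER;
            }
            text[0] = (char)c; text[1] = '\0';  /* single-character punctuation */
            return TOK_PUNCT;
        }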

  • Can't upgrade Firefox

    - by Deb Vorndran
    When I first got Ubuntu and got used to it, I decided to install lots of packages, and I overwhelmed the update manager. I had to interrupt it, because the download time was crazy long. That was 2 years ago, and I haven't been able to upgrade anything since then. But now, some things I need will not work in my 2-year-old version of Firefox. I want to cancel all of the package installations that I started and was not able to finish, and I need to upgrade Firefox. What should I do?
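    A commonly used recovery sequence for interrupted installations looks like the sketch below (offered as an assumption of what applies here, not a quoted answer - review each step before running it):

        sudo dpkg --configure -a     # finish any half-configured packages
        sudo apt-get -f install      # repair broken dependencies
        sudo apt-get update
        sudo apt-get dist-upgrade    # then Firefox upgrades with everything else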

  • 13.10 hangs on waking from suspend except when suspended from console

    - by Pavel
    I know waking from suspend is an issue, but this looks like a separate bug. When I suspend 13.10 on an HP Pavilion dv6 (AMD 6770M/fglrx 13.10.10) from X, it suspends normally but freezes when waking: I get a black screen with a frozen cursor. But when I suspend from the console with sudo pm-suspend, it wakes normally, and I can then get my X session back with Ctrl-Alt-F7. If I suspend by closing the lid under X, it also freezes when waking up. If I suspend by closing the lid under the console, it wakes up into the X (?) login, then into a clean session.

  • Content Challenge: You Can Only Get it Here

    - by Mike Stiles
    Part of the content conundrum for brands is figuring out what kind of content customers would find cool, desirable, and relevant. The mere fact many brands have no idea what this content might be is, in itself, pretty alarming. You’d have to have a pretty thorough lack of involvement with and understanding of your customers to not know what they might like.

    But despite what should be a great awakening in which consumers are using every technology and trick in the book to shield themselves from ads and commercials, brand self-obsession continues as marketers concentrate on their message, their campaign, what they want to say, and what they want social users to do. When individuals conduct themselves in that same fashion on Facebook and Twitter, it gets tiresome and starts losing value pretty quickly. Their posts eventually get hidden. Conversely, friends who post things that consistently entertain or inform, with little self-marketing desperation involved, win the coveted “show all updates” setting.

    Of course brands are going to use social to market. It’s pretty much the point of having social in the marketing mix. And yes, people who follow a brand’s Twitter account or “Like” a brand’s Facebook Page implicitly state they want to know what’s going on with that brand’s products and services. But if you have a Facebook friend that assumes you want every one of her posts to be about what wine she likes (Mitsubishi’s current campaign is even based around weeding out pretentious Facebook friends, then running them over), then you know how it must feel for your fans and followers to get a sales pitch for your crackers or whatever you’re selling every single time.

    Is there such a thing as content that doesn’t sell but that still advances the brand and makes the consumer more involved and valuable? Of course. And perhaps there are no better companies than enterprise brands to do it. Enterprise organizations are large enough to go beyond a product and engage readers/viewers at higher, broader levels…communicating expertise across entire sectors, subjects and industries. You’re going from pitchman to news source, and getting full credit for it as the presenter.

    A recent GigaOM article pointed out the success a San Francisco-based startup called Crunchyroll is having. Their niche (and they proudly admit it’s a niche) is providing Japanese anime, Korean drama and Asian live action content to countries that can’t get it any other way via licensing deals. Shows are available in HD and on the same day they air in the host country. Crunchyroll not only gets 8 million viewers a month, they have 100,000 paying subscribers at $7-12/month.

    Got a point, Mike? I do happen to have one. Crunchyroll illustrates the content opportunity enterprise companies have…which is to determine your “area,” the interest graph of your customers, then provide content that speaks to and satisfies those interests that can’t be found anywhere else. At least not in the same style, or of the same quality, or with the same authority.

    Do what no one else is doing. Provide what no one else is providing in your sector. If underserved users are willing to pay monthly for access to awkwardly moving cartoon dragons, imagine the audience you could attract with free, useful, non-sales content in your customers’ area of interest. It’s an audience you’ll want in place when the time does come to put out that marketing message. A content challenge is better than a content conundrum any day.

  • Netbook performs hard shutdown without warning on low battery power

    - by Steve Kroon
    My Asus EEE netbook performs a hard shutdown when it reaches low battery power, without giving any warning - i.e. the power just goes off, without any shutdown process. I can't find anything in the syslog, and no error messages are printed before it happens. I've had this problem on previous (K)Ubuntu versions, and hoped updating to Ubuntu Precise would help resolve the issue, but it hasn't. The option in the Power application for "when power is critically low" is currently blank - the only options are a (grayed-out) hibernate and "Power off". I have re-installed indicator-power to no effect. The time remaining reported by acpi is unstable, as is the time remaining reported by gnome-power-statistics. (For example, running acpi twice in succession, I got 2h16min and then 3h21min remaining. These sorts of jumps in the remaining time are also in the gnome-power-statistics graphs.) It might be possible to write a script to give me advance warning (as per @RanRag's comment below), but I would prefer to isolate why I don't get a critical battery notification from the system before this happens, so that I can take action as appropriate (suspend/shutdown/plug in power) when I get a notification.

    Some additional information on the battery:

        kroon@minia:~$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
          native-path:          /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/PNP0C0A:00/power_supply/BAT0
          vendor:               ASUS
          model:                1005P
          power supply:         yes
          updated:              Fri Aug 17 07:31:23 2012 (9 seconds ago)
          has history:          yes
          has statistics:       yes
          battery
            present:            yes
            rechargeable:       yes
            state:              charging
            energy:             33.966 Wh
            energy-empty:       0 Wh
            energy-full:        34.9272 Wh
            energy-full-design: 47.52 Wh
            energy-rate:        3.7692 W
            voltage:            12.61 V
            time to full:       15.3 minutes
            percentage:         97.248%
            capacity:           73.5%
            technology:         lithium-ion
          History (charge):
            1345181483  97.248  charging
            1345181453  97.155  charging
            1345181423  97.062  charging
            1345181393  96.970  charging
          History (rate):
            1345181483  3.769   charging
            1345181453  3.899   charging
            1345181423  4.061   charging
            1345181393  4.201   charging

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/state
        present:                 yes
        capacity state:          ok
        charging state:          charging
        present rate:            332 mA
        remaining capacity:      3149 mAh
        present voltage:         12612 mV

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/info
        present:                 yes
        design capacity:         4400 mAh
        last full capacity:      3209 mAh
        battery technology:      rechargeable
        design voltage:          10800 mV
        design capacity warning: 10 mAh
        design capacity low:     5 mAh
        cycle count:             0
        capacity granularity 1:  44 mAh
        capacity granularity 2:  44 mAh
        model number:            1005P
        serial number:
        battery type:            LION
        OEM info:                ASUS
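    In the spirit of @RanRag's suggestion, a polling watchdog is straightforward; a minimal sketch (invented here, with arbitrary thresholds - it does not answer why the critical-battery notification never fires):

        #!/bin/sh
        # Hypothetical low-battery watchdog: poll upower once a minute and
        # raise a desktop warning when discharging below 10%.
        BAT=/org/freedesktop/UPower/devices/battery_BAT0
        while true; do
            pct=$(upower -i "$BAT" | awk '/percentage/ { gsub(/%/, ""); print int($2) }')
            state=$(upower -i "$BAT" | awk '/state/ { print $2; exit }')
            if [ "$state" = "discharging" ] && [ "$pct" -le 10 ]; then
                notify-send -u critical "Battery low" "${pct}% remaining - plug in or suspend now"
            fi
            sleep 60
        done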

  • Is rotating the lead developer a good or bad idea?

    - by Renesis
    I work on a team that has been flat organizationally since its creation several months ago. My manager is non-technical, and this means that our whole team is responsible for decision-making. My manager is beginning to realize that there are several benefits to having a lead developer, both for his sake (a single point of contact and single responsible party for tasks) and ours (dispute resolution, organized technical guidance, etc.). Because the team has been flat, one concern is that picking one lead developer may discourage the others. A non-developer suggested to my manager that rotating the lead developer is a possible way to avoid this issue: one developer would be lead one month, another the next, and so on. Is this a good idea? Why or why not? Keep in mind that this means all developers would take the role - all developers are good, but not necessarily equally suited to leadership. And if it is not a good idea, suppose I am likely the best candidate for lead developer - how do I recommend that we avoid this approach without looking like it's merely for selfish reasons? (In other words, the team is small enough that anyone recommending a single leader is likely to appear to be recommending themselves - especially those who have been part of the team longer.)

  • apt-get update bzip2 errors

    - by Tejas Kale
    I installed Ubuntu 11.10 today on my Lenovo W500. After that, when I tried running

        sudo apt-get update

    this is the error I am getting:

        Get:117 http://ftp.jaist.ac.jp oneiric-security/universe TranslationIndex [73 B]
        99% [48 Sources bzip2 0 B] [22 Sources bzip2 5,294 kB]   1,983 kB/s 0s
        bzip2: Compressed file ends unexpectedly;
          perhaps it is corrupted?  *Possible* reason follows.
        bzip2: Inappropriate ioctl for device
          Input file = (stdin), output file = (stdout)
        It is possible that the compressed file(s) have become corrupted.
        You can use the -tvv option to test integrity of such files.
        You can use the `bzip2recover' program to attempt to recover
        data from undamaged sections of corrupted files.

    I found the following similar question: Errors while updating Ubuntu 11.10. But the solutions mentioned there (changing the download server, running apt-get clean, apt-get autoclean) did not help, and I have also tried removing the /var/cache/apt/archives/lists directory. As a result of this, I am unable to install any new packages.
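    For what it's worth, the list cache that apt actually uses lives under /var/lib/apt/lists (not /var/cache/apt/archives); a commonly suggested way to force a clean re-download - an assumption on my part, not a confirmed fix for this exact error - is:

        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update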

  • Change power button to 'Ask' in Xubuntu 13.10

    - by Gully.Moy
    I have recently installed Xubuntu 13.10 on my Vaio VPCEA, making me a Linux beginner. The problem is that the laptop's power button is right on the edge of the bezel, making it far too easy to press accidentally - in my opinion a design fault by Sony. At present, when I press the power button it shuts down straight away, and as you can imagine, when I'm accidentally pressing it all the time it gets very annoying! So I planned to change it to ask what I would like to do when I press it, or at least ask if I'm sure.

    So I went through the Xfce GUI options "Settings Manager" - "Power Manager" to the field "When power button is pressed", but it was already set to "Ask". So I did some digging and found a thread telling me to navigate to /etc/xdg/xfce4/xfconf/xfce-perchannel-xml/xfce4-power-manager.xml, where it said to find power-button-action and check that value="3". It already did.

    So I looked some more and found this thread, which focuses on acpi scripts. I tried solutions 1 and 2, using sudoedit to change the files accordingly (I have made executable bash shell scripts already, so I think I followed them correctly), but still no difference. I also found this thread, which instructed me to edit /etc/systemd/logind.conf so that HandlePowerKey=ignore. Still no luck. I even tried my own approach of completely disabling /etc/acpi/powerbtn.sh by renaming it powerbtn.sh.bak, hoping for at least no response from the power button... and I have done many reboots in between... but still it shuts down! I have also read that some people have the file /etc/acpi/events/power_button, but I do not.

    So does anyone have any other ideas? What else could be executing the shutdown sequence? Is there something I'm missing? I haven't undone any of these actions, so every one of the above files is currently edited on my computer, with the exception that "Solution 2" automatically undid "Solution 1" above. Thanks guys.
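    One further check worth trying (my own suggestion, not from the threads cited): xfconf settings can be read and written live with xfconf-query, which at least confirms what the power manager currently thinks the button should do:

        # List the channel's properties, then inspect and set the button action
        xfconf-query -c xfce4-power-manager -l
        xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/power-button-action
        xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/power-button-action -s 3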

  • ACPI _OSC control

    - by nmc
    I've noticed an error in the dmesg output, saying that Ubuntu cannot enable ASPM:

        [    0.192722] pci0000:00: ACPI _OSC support notification failed, disabling PCIe ASPM
        [    0.192728] pci0000:00: Unable to request _OSC control (_OSC support mask: 0x08)

    While I know that this is not a bug, I'd like to ask if there is a way to correct it and enable ASPM, the reason being that it should increase battery life (powertop shows that PCIe is always 100% on). Edit: I am running on an ASUS Eee PC 1015PX with the latest BIOS version (1401).
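    A workaround sometimes suggested for this message (offered as an assumption, not a confirmed fix - and potentially destabilizing, since the firmware is explicitly refusing ASPM control) is to force ASPM on the kernel command line:

        # In /etc/default/grub:
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=force"
        # Then regenerate the grub configuration and reboot:
        sudo update-grub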

  • Why Ultra-Low Power Computing Will Change Everything

    - by Tori Wieldt
    The ARM TechCon keynote "Why Ultra-Low Power Computing Will Change Everything" was anything but low-powered. The speaker, Dr. Johnathan Koomey, knows his subject: he is a Consulting Professor at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University, Yale University, and UC Berkeley's Energy and Resources Group. His current focus is creating a standard (computations per kilowatt-hour) and measuring computer energy consumption over time. The trends are impressive: energy consumption has halved every 1.5 years for the last 60 years, and battery life has made roughly a 10x improvement each decade since 1960. It's these improvements that have made laptops and cell phones possible.

    What does the future hold? Dr. Koomey said that in the past, the race by chip manufacturers was to create the fastest computer, but the priorities have now changed. New computers are tiny, smart, connected, and cheap. "You can't underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices," he said. There is also a confluence of trends in computing, communications, sensors, and controls. The challenge is how to reduce the power requirements for these tiny devices. Alternate sources of power that are being explored are light, heat, motion, and even blood sugar. The University of Michigan has produced a miniature sensor that harnesses solar energy and could last for years without needing to be replaced. Also, the University of Washington has created a sensor that scavenges power from existing radio and TV signals.

    Specific devices designed for a purpose are much more efficient than general-purpose computers. With all these sensors, instead of big data, developers should focus on nano-data: personalized information that will adjust the lights in a room, a machine, a variable sign, etc. Dr. Koomey showed some examples:

    - The Proteus Digital Health Feedback System, an ingestible sensor that transmits when a patient has taken their medicine and is powered by their stomach juices. (Gives "powered by you" a whole new meaning!)
    - Streetline Parking Systems, which provide real-time data about available parking spaces. The information can be sent to your phone or update parking signs around the city to point to areas with available spaces. Less driving around looking for parking spaces!
    - The BigBelly trash system, which uses solar power, compacts trash, and sends a text message when it is full. This dramatically reduces the number of times a truck has to come to pick up trash, freeing up resources and slashing fuel costs. This is a classic example of the efficiency of moving "bits not atoms."

    But researchers are approaching the physical limits of sensors, Dr. Koomey explained. At the current rate of technology improvement, they'll reach the three-atom transistor by 2041. Once they hit that wall, it will force a revolution in the way we do computing. But wait - researchers at Purdue University and the University of New South Wales are both working on reliable one-atom transistors! Other researchers are working on "approximate computing" that will reduce computing requirements drastically. So it's unclear where the wall actually is. In the meantime, as Dr. Koomey promised, ultra-low power computing will change everything.

  • Is there a difference between installing an application via Ubuntu Software Center or a terminal?

    - by gabriel
    I would like to ask a very basic question, but I have never thought about it before. When someone installs an application from the terminal, he has to add the repository first, right? On the other hand, when someone installs an application from the Ubuntu Software Center, is the repository then added automatically? I am asking these questions to figure out this: when I run update and then upgrade, will this application be upgraded or not? Is the result the same in the two options?

  • Choosing between two programmers: experience vs. passion

    - by Duke
    I am in a position where I have to hire a programmer and have the option of two candidates. The first has experience, but he doesn't have a passion for coding, and he says so. The second doesn't have the experience, but he has the passion; he did well in the interview and is certified. We have the resources to train someone, but I really don't want to blow this process and hire someone who will be disappointing. Can anyone help me as to how to approach this situation?

  • Delta-update Firefox Aurora package from PPA

    - by ignite
    I am using Firefox Aurora on my Ubuntu 12.04, which I have installed via its PPA (ppa:ubuntu-mozilla-daily/firefox-aurora). As expected of Aurora, I get an update usually every 2-3 days. I have Firefox Aurora installed on Windows too. There I also get updates every 2-3 days, but the size of the update is usually 4-5 MB, while on Ubuntu it's always around 20 MB. What is the reason for this difference? Is there any way I can download and install only the changes, and not the entire Aurora, again and again?
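    One existing mechanism for delta upgrades on Debian/Ubuntu is debdelta; whether it helps here is an assumption on my part, since it only works for repositories that actually publish deltas (a PPA typically does not):

        sudo apt-get install debdelta
        sudo debdelta-upgrade    # downloads deltas and rebuilds the full .debs locally
        sudo apt-get upgrade     # installs the rebuilt packages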

  • Why do business analysts and project managers get higher salaries than programmers? [closed]

    - by jpartogi
    We have to admit that programming is much more difficult than creating documentation or even creating Gantt charts and asking programmers about their progress. So, for those of us who are naive, knowing that programming is generally more difficult, why do business analysts and project managers get higher salaries than programmers? What is it that makes their jobs high-paying when, most of the time, programmers are the ones who go home late?

    UPDATE: Excuse my ignorance; from some of the responses, it seems that the reason BAs and PMs get higher salaries is that they are the ones usually held responsible for the mess programmers make. But at the end of the day, it is programmers who get their hands dirty to fix the mess and work harder. So it still does not make sense.

  • Solaris 11 Live CD-based installation

    - by AndrasF
    Instead of the two installation procedures promised in the previous part, in this post I am forced to cover only the Live CD variant. I had not expected that presenting even this one would require more than 50 screenshots, so I had to change the earlier plan. The Solaris 11 Live CD installation primarily targets desktop users, and it is supported exclusively on x86 machines (even though SPARC systems - e.g. the T4-1 - also have graphics cards). The process can be split in two: first the guest machine is created in the VirtualBox environment, and then Solaris 11 is installed on the virtual machine.

    HCL and helper tools (DDT, DDU)

    Before installing the Solaris operating system, it is worth checking how well our physical system is supported. The already mentioned hardware compatibility list (HCL) serves this purpose well, as do the following two helper tools:

    - Device Detection Tool
    - Device Driver Utility

    Both applications can survey the hardware components of our system and check their driver coverage. They differ in that DDT needs Java to run, while DDU requires Solaris. The latter will come up briefly during the installation.

    Where to download the installation media

    Unless we install over the network (*), we need installation media, which can be downloaded from the page below. It is worth downloading all three files, as well as the so-called repository medium containing the packages (the last item of the following list):

    - sol-11-1111-live-x86.iso
    - sol-11-1111-text-x86.iso
    - sol-11-1111-ai-x86.iso
    - sol-11-1111-repo-full.iso

    The first three variants are also available in bootable USB format - in that case the file names end in usb instead of iso. A short note on the role of each medium can be found in the previous blog post (link). If we want to install on a SPARC system, we need the files whose names contain 'sparc' instead of 'x86'.

    (*) It is also possible to perform the installation over the network by booting from the AI medium. This matters when the target machine has no PXE (Preboot Execution Environment) boot support.

    Configuring VirtualBox

    Solaris 11 can also be used as a guest in a virtual environment, without dedicating a physical machine to it. VirtualBox offers a convenient way to do this. The installer program or package matching our host (Windows, Unix, Linux) - currently version 4.1.16 is the latest - and the user manual covering the installation can be downloaded from the product page. After a successful installation, the following steps lead to the new virtual machine:

    1. After starting VirtualBox, the main window shows our existing virtual machines (Sol11demo, Sol11u1b07, Sol11.1B16, Sun_ZFS_Storage_7000) and the main attributes of the currently selected one (Sol11demo): name, memory size, list of virtual storage devices, etc.

    2. Clicking the New button starts the wizard that creates the virtual machine.

    3. Next we have to name the guest and select the operating system type (if we use a descriptive name, VirtualBox can pick the operating system family itself and we only have to set the version): enter Solaris11 as the name and choose the 64-bit variant (provided our host supports it).

    4. For the installation and the first steps, 1536 MB of memory is perfectly adequate (this can be changed later as needed).

    5. Like its physical counterparts, no virtual machine can exist without a hard disk (in this case a virtual disk). We can use an already existing area (a file containing a virtual disk), but we can also create a brand new one. Let's stay with the latter (Create new hard disk).

    6. Of the possible formats - for simplicity's sake - let's go with the offered default (VDI - VirtualBox Disk Image).

    7. During creation, the virtual disk can be produced in one go (Fixed size) or dynamically, in several steps (Dynamically allocated). The first variant puts far less load on the system, while the advantage of the second is space efficiency. Choose the fixed-size variant.

    8. Now only a single piece of information is unknown to VirtualBox: the size of the virtual disk to be created. An 8 GB area is suitable here for starting to get acquainted.

    9. If every setting was entered correctly, pressing the Create button starts the creation of the virtual disk.

    10. Depending on the values given, this operation finishes within a few minutes.

    11. After a similar confirmation (pressing the Create button), the creation of the requested virtual machine begins as well.

    12. On success, the new virtual machine immediately appears in the main window's left-hand list among the available virtual machines.

    This blog post is being updated continuously... the remaining content of this part will appear on the page soon.
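    The same guest can also be prepared from the command line; the following is a sketch of the VBoxManage equivalent of the wizard steps above (my own illustration, not part of the original post - adjust names and paths as needed):

        # Create and register the VM (the Solaris11_64 OS type matches step 3)
        VBoxManage createvm --name Solaris11 --ostype Solaris11_64 --register
        # 1536 MB of memory, as in step 4
        VBoxManage modifyvm Solaris11 --memory 1536
        # Fixed-size 8 GB disk, as in steps 5-10
        VBoxManage createhd --filename Solaris11.vdi --size 8192 --variant Fixed
        # Attach the disk and the Live CD image
        VBoxManage storagectl Solaris11 --name "SATA" --add sata
        VBoxManage storageattach Solaris11 --storagectl "SATA" --port 0 --device 0 --type hdd --medium Solaris11.vdi
        VBoxManage storageattach Solaris11 --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium sol-11-1111-live-x86.iso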

  • How to share problem solving knowledge in a multiteam group?

    - by jonathan
    I've been working in multi-team groups for as long as I've been a web developer. For me, a team can be a lone soldier or several people; generally a company will have multiple teams working on different projects, and once a project is out in the wild, any team can perform the maintenance. This is a small picture, since I'm not talking only about project-wise knowledge but about craft-wise knowledge, but it gives a picture of how I'm used to working. So: since we work in modularised teams, sometimes I feel the teams are too tightly enclosed in their projects. I've seen cases where, after an hour of discussion, someone asked the question aloud and another person, totally unrelated to the project, answered it in a much simpler fashion. The problem is not so simple to solve, as people tend not to be available all the time; also, sometimes people can't afford the time to go through a problem with the asker, but could solve it alone. I've thought about software-based solutions, something along the lines of SE, but I'd like to know other programmers' opinions on the subject.

    EDIT: I don't know if this is a Wikipedia complex, but I feel that wikis don't encourage the user to actually ask questions, but rather to write articles - and sometimes we don't know what knowledge we need before needing it.

  • How do bug reports factor into a sprint?

    - by Mark Ingram
    I've been reading up on Scrum recently. From my understanding, a meeting is held before the sprint starts, to decide what gets moved from the product backlog to the upcoming sprint backlog. Once a feature is completed in the current sprint, it will go into the "Ready to QA" bucket, and it's at this point that I'm getting confused. Do bug reports go back into the product backlog? I assume they can't go back into the sprint backlog as we've already decided what work will be done for this cycle? What happens when QA finds a bug? Where does it go?

  • How to let the screen time out on the sign-in screen

    - by Aren Cambre
    I have Ubuntu 13.10 set up on an older PC. If I am logged in as a user, then the screen timeout and power-saving mode work as expected. However, if nobody is logged in, the screen never times out, and the monitor stays on all the time with the login screen. How can I adjust Ubuntu 13.10 so that the login screen also times out after a minute or so? I don't want the monitor's power-saving mode to be disabled just because nobody is currently logged in.
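    One approach sometimes suggested (an assumption, not a confirmed answer for 13.10) is to have the greeter's X session enable display power management itself via a LightDM setup script:

        #!/bin/sh
        # /usr/local/bin/greeter-dpms.sh - blank the display after 60 seconds idle
        xset +dpms
        xset dpms 0 0 60

        # /etc/lightdm/lightdm.conf - run the script when the greeter's X server starts
        [SeatDefaults]
        display-setup-script=/usr/local/bin/greeter-dpms.sh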

  • No NFC for the iPhone, and here's why

    - by David Dorf
    I, like many others in the retail industry, was hoping the iPhone 5 would include an NFC chip that enabled a mobile wallet. In previous postings I've discussed the possible business case and the foreshadowing of Passbook, but it wasn't meant to be. A few weeks ago I was considering all the rumors, and it suddenly occurred to me that it wasn't in Apple's best interest to support an NFC chip. Yes, they have patents in this area, but perhaps those are more defensive than indicative of new development.

    Steve Jobs wanted to always win, but more importantly he didn't want others to win at his expense. It drove him nuts that Windows was more successful than MacOS, and clearly he was bothered by Samsung and other handset manufacturers copying the iPhone. But he was most angry at Google for their stewardship of Android.

    If the iPhone 5 had an NFC chip, who would benefit most? Google Wallet is far and away the leader in NFC-based payments via mobile phones in the US. Even without Steve at the helm, Apple isn't going to do anything to help Google. Plus, Apple doesn't like to do things in an open way - then they lose control. For example, you don't see iPhones with expandable memory, replaceable batteries, or USB connectors. Adding a standards-based NFC chip just isn't in their nature.

    So I don't think Apple is holding back on the NFC chip for the 5S or 6. It just isn't going to happen unless they can figure out how to prevent others from benefiting from it. All the other handset manufacturers will use NFC as a differentiator, which may be enough to keep Google and Isis afloat, and of course Square and PayPal aren't betting on NFC anyway. This isn't the end of alternative payments, it's just a major speed bump.
