Search Results

Search found 65206 results on 2609 pages for 'real time'.


  • DIY Carbonator Creates Pop Rocks Like Fizzy Fruit [Science]

    - by Jason Fitzpatrick
    If you’ve ever sat around wishing that scientists would stop wasting time trying to solve pressing global problems and instead genetically engineer a bizarre but delicious hybrid of Pop Rocks candy and wholesome fruit, this mad-scientist experiment is for you. Over at Evil Mad Scientist Laboratories they share a really fun weekend project. Contributor Rich Faulhaber was looking for a way to make eating fruit extra fun and science-infused for his kids. His solution? Build a homemade carbon dioxide injector that infuses fruit with carbonation. Having trouble imagining that? Envision a bowl of strawberries where every strawberry bursts into a crazy flurry of strawberry flavor and champagne bubbles every time you bite into it. Fizzy fruit! Hit up the link below to see how he took pretty common parts: a CO2 tank from a paintball gun, a water filter canister from the hardware store, and other cheap and readily available parts (with the exception of the gas regulator, which he suggests you shop garage sales and surplus stores to find a deal on), and combined them to create a CO2 fruit infuser, along with the procedure he uses to infuse fruit with carbonation. The CO2inator [Evil Mad Scientist Laboratories via Hack a Day]

    Read the article

  • How do you keep cool when production system goes down?

    - by Mag20
    This has happened to most of us... You come to work one day. Everything seems normal: the sun is shining, birds are chirping, but you notice a couple of weird things on your way to work, like déjà vu with the cat in The Matrix. You get into the office and there are a lot of phones ringing, but it could be that they are just doing a new sales promotion. You settle in, when you notice a dark cloud hovering over you. It takes you a couple of moments, but you recognize the cloud is your boss. Usually he checks on you every morning with his "Soooo Peeeeter, how about those TCP/IP reports?" routine, but today he has forgotten everything about common manners and rudely invaded your personal space. No "Good Morning", just some drooling, grunts and curses. He reminds you a bit of a Neanderthal trying to get away from a cyber-tooth tiger, fear and panic all compressed into a tight ball. You try to decipher the new language that he has created since yesterday, and you start understanding that something bad happened overnight: the production system went down. Now, your system is usually used by clients during regular working hours from 9 to 5, but for whatever reason you didn't get any alerts on your beeper (for people under 30, a beeper was like a mobile phone that could only ring and tell you who beeped you). Need to remember to charge it next time. So it is 8:45 am, and the system MUST be up at 9 am. Every 10 seconds, your boss lets out yet another curse, which communicates to you that another customer is having problems getting into the system. Several account managers are now also hovering over your boss, trying to make him understand how clients are REALLY REALLY suffering. Everyone is depending on you to get the system up ASAP, while at the same time hindering your progress by constantly distracting you. How do you keep cool in a situation like this?

    Read the article

  • OOW 2012: Kings of Leon & Pearl Jam - Appreciation Event

    - by Mike Dietrich
    June 15, 1992 - that was actually the day when Pearl Jam played their first concert at Serenadenhof in my hometown, Nürnberg. Oops... that's over 20 years ago... So I was very happy to get a ticket to this year's OOW 2012 appreciation event on Treasure Island. Every year it amazes me over and over again how the organizers manage the logistics of bringing almost 40,000 people to and back from the island. The food was... I would say fairly OK... and the beer (as always) was not. Actually, even though I'm not a beer drinker, I wouldn't call it beer. Kings of Leon opened. I like them a lot and own their 2008 album Only By Night. That was a good start to warm up the crowd. And then Pearl Jam took over, and... wooooooow... they are such a great live band. First of all, as far as I understood, they were donating the money they got for the gig to an NGO. And Eddie Vedder's voice is simply striking... I had shivers running down my spine. They played a good cross-section of their 20+ year career, closing with Alive at the very end. It seemed to everybody that the band had real fun playing there, and it was sooooo good. Thanks a lot to the person who organized a ticket for me. Catching my bus back to my hotel area down at Fisherman's Wharf worked well, but I must have fallen asleep 5 minutes after we left the parking lot. The next thing I remember is the bus driver hitting the brakes at Northpoint. What a wonderful night...

    Read the article

  • Common Live Upgrade problems

    - by user12611829
    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here.

    Live Upgrade copies over ZFS root clone

    This was introduced in Solaris 10 10/09 (u8), and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like.

      # lucreate -n s10u9-baseline
      Checking GRUB menu...
      System has findroot enabled GRUB
      Analyzing system configuration.
      Comparing source boot environment file systems with the file system(s) you specified for the new boot environment.
      Determining which file systems should be in the new boot environment.
      Updating boot environment description database on all BEs.
      Updating system configuration files.
      Creating configuration for boot environment .
      Source boot environment is .
      Creating boot environment .
      Creating file systems on boot environment .
      Creating file system for in zone on .
      /usr/lib/lu/lumkfs: test: unknown operator zfs        <----- the error indicator
      Populating file systems on boot environment .
      Checking selection integrity.
      Integrity check OK.
      Populating contents of mount point .
      Copying.                                              <----- this should not happen; Ctrl-C and clean up

    If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or, the one that alerted me to the problem, the root file system filling up - again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works.

      # patchadd -p | grep 121431
      Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone
      Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur
      # unzip 121431-58
      # patchadd 121431-58
      Validating patches...
      Loading patches installed on the system... Done!
      Loading patches requested to install. Done!
      Checking patches that you specified for installation. Done!
      Approved patches will be installed in this order: 121431-58
      Checking installed patches...
      Executing prepatch script...
      Installing patch packages...
      Patch 121431-58 has been successfully installed.
      See /var/sadm/patch/121431-58/log for details
      Executing postpatch script...
      Patch packages installed: SUNWlucfg SUNWlur SUNWluu
      # lucreate -n s10u9-baseline
      Checking GRUB menu...
      System has findroot enabled GRUB
      Analyzing system configuration.
      INFORMATION: Unable to determine size or capacity of slice .
      Comparing source boot environment file systems with the file system(s) you specified for the new boot environment.
      Determining which file systems should be in the new boot environment.
      INFORMATION: Unable to determine size or capacity of slice .
      Updating boot environment description database on all BEs.
      Updating system configuration files.
      Creating configuration for boot environment .
      Source boot environment is .
      Creating boot environment .
      Cloning file systems from boot environment to create boot environment .
      Creating snapshot for on .
      Creating clone for on .
      Setting canmount=noauto for in zone on .
      Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
      Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
      Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
      File propagation successful
      Copied GRUB menu from PBE to ABE
      No entry for BE in GRUB menu
      Population of boot environment successful.
      Creation of boot environment successful.

    This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone.

      # cat /etc/lu/ICF.3
      s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608
      s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0
      s10u8-baseline:/vbox:pandora/vbox:zfs:0
      s10u8-baseline:/setup:pandora/setup:zfs:0
      s10u8-baseline:/export:pandora/export:zfs:0
      s10u8-baseline:/pandora:pandora:zfs:0
      s10u8-baseline:/panroot:panroot:zfs:0
      s10u8-baseline:/workshop:pandora/workshop:zfs:0
      s10u8-baseline:/export/iso:pandora/iso:zfs:0
      s10u8-baseline:/export/home:pandora/home:zfs:0
      s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0
      s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0

    Solaris 10 9/10 introduces a new autoregistration file

    This one is actually mentioned in the Oracle Solaris 10 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like.

      # luupgrade -u -s /mnt -n s10u9-baseline
      System has findroot enabled GRUB
      No entry for BE in GRUB menu
      Copying failsafe kernel from media.
      61364 blocks
      miniroot filesystem is
      Mounting miniroot at
      ERROR: The auto registration file does not exist or incomplete.
      The auto registration file is mandatory for this upgrade.
      Use -k argument along with luupgrade command.
      autoreg_file is path to auto registration information file.
      See sysidcfg(4) for a list of valid keywords for use in this file.
      The format of the file is as follows.
      oracle_user=xxxx
      oracle_pw=xxxx
      http_proxy_host=xxxx
      http_proxy_port=xxxx
      http_proxy_user=xxxx
      http_proxy_pw=xxxx
      For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade".

    As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example.

      # echo "autoreg=disable" > /var/tmp/no-autoreg
      # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline
      System has findroot enabled GRUB
      No entry for BE in GRUB menu
      Copying failsafe kernel from media.
      61364 blocks
      miniroot filesystem is
      Mounting miniroot at
      #######################################################################
      NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting.
      You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously.
      For information about what configuration data is communicated and how to control this facility, see the Release Notes or www.oracle.com/goto/solarisautoreg.
      INFORMATION: After activated and booted into new BE , Auto Registration happens automatically with the following Information
      autoreg=disable
      #######################################################################
      Validating the contents of the media .
      The media is a standard Solaris media.
      The media contains an operating system upgrade image.
      The media contains version .
      Constructing upgrade profile to use.
      Locating the operating system upgrade program.
      Checking for existence of previously scheduled Live Upgrade requests.
      Creating upgrade profile for BE .
      Checking for GRUB menu on ABE .
      Saving GRUB menu on ABE .
      Checking for x86 boot partition on ABE.
      Determining packages to install or upgrade for BE .
      Performing the operating system upgrade of the BE .
      CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.

    The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.

    Technorati Tags: Oracle Solaris Patching Live Upgrade

    Read the article

  • Exadata X3 In-Memory Database Machine: To be or not to be

    - by Luis Moreno Campos
    Since Larry Ellison announced Oracle Exadata X3 as the new generation of the Database Machine, he has positioned the product in the In-Memory Database arena. And that annoyed some people. We all know that In-Memory Databases are the ones that *only* execute in memory and use the other layers of storage for persistency (mainly disk). Oracle Database has always been a technology that uses memory as a caching mechanism, and that hasn't changed, nor will it change, with Oracle Database 12c. So this is the central point of fuss when it comes to announcing an Engineered System as an In-Memory Database, when in fact it still runs Oracle Database - not vanilla, but still the same product. Let me tell you, purists out there: when you find no new groundbreaking point to get all excited about, you decide to bash it and go against its claims. It's not like a car manufacturer launching a minivan and calling it a sports car; we are talking about a fundamental change in the ILM stack: level 2 of caching is now self-sufficient. It's not DRAM? Who cares - it still lets you put amounts of data into flash that weren't possible up until now, so I guess Oracle can name it whatever Larry wants, because in the end it's something never done before. Now let's imagine that you hop on the pure In-Memory Database bandwagon. You would be stuck with a database technology that lags hundreds of light years behind Oracle Database in man-hours of innovation and features. Do you really want to travel back in time? Remember, the first rule of time travelling is that "Security is not Guaranteed". Your choice. LMC

    Read the article

  • SAP or Navision? Career Path

    - by codebased
    This could be tricky to ask; I wasn't sure whether to ask this question here, but I thought to give it a try. I've been in the software industry since 2002, and I'm now at a senior level where I normally code, lead and define the architecture; presenting technical solutions to management is one of the assets I've earned during my service. Now it is time to define the road map for the future, $$$. I am not in favor of project management roles. I've been thinking of going into ERP, and my current company does provide me an option to go for Navision/Microsoft Dynamics. They are currently on 4.0, but they are planning to move to the 2009 version and also to build a plug-in of their own. Indeed the option is good, because Microsoft is trying to capture the market with the Dynamics products; however, they have had less success in Australia. Now, another option is SAP, where a person can make $200K a year, whereas I doubt the same kind of financial growth is available for a Microsoft geek. What is your opinion on Navision versus SAP? If I try to move completely to SAP it could be a bit challenging, as the market will consider me a fresher; however, the return is quite good. With Microsoft, I think the technology changes so fast that there is less chance to grow within the same experience; in other words, if a new framework comes to .NET, the market looks for the person who knows that new framework, not just .NET. With SAP, the base remains the same, and the chances of earning more from the market are better. What would you do if you were me? On Stack Overflow, Navision questions number 20+, whereas SAP has 200+... :-)

    Read the article

  • [Dear Recruiter] I developed in Mo'Fusion

    - by refuctored
    Foreword: Sometimes I really feel like technology recruiters have no experience or knowledge of the field they are recruiting for. A warning to those companies hiring technical recruiters: ensure that the technical recruiters you hire to fill a position are actually technical. Here's proof below, where I make up completely ridiculous technologies but still get interest from the recruiter for an interview.

    Letter to me: Hello - Your name came up as a possible match for a long-term contract Cold Fusion Developer role I have in Bothell, WA. This role requires you to be onsite in Bothell, WA. This is a tough role to fill, so I was hoping you might have someone you can recommend? Unfortunately no telecommute. Thank you! Sincerely, Mindy Recruiter

    My response: Mindy -- Wow, I'm super-excited that you took the time to contact me about this position! Let me tell you, you won't be disappointed with my skill set! Firstly, I've been developing in ColdFusion since 1993, before it was owned by Adobe, when it was operating under the code name "Hot-Jack". Recently I started developing under the Domain-View-Driven-Domain-Model (DVDDM), integrating client-side CF on Moobuntu. Not only do I have a boatload of ColdFusion EXP, I also have a ton of experience in the open source community's lesser-known derivative of CF, Mo'Fusion (MF). I've also invested thousands of hours of my time learning esoteric programming languages. Look forward to working with you! George

    And her response: Hi George – just left you a message. Give me a call at your convenience. The role does require someone to be onsite here.. are you able to relocate yourself? Mindy

    [Sigh]

    Read the article

  • Advantages and Disadvantages of the Waterfall Methodology

    In my personal opinion, the waterfall method is one of the worst methodologies to use when developing larger systems, because it leaves no room for mistakes. As the name implies, the waterfall methodology does not allow projects to go back upstream to recover from design errors or missing and/or limited requirements. In addition, hidden bugs are not usually found until the testing phase, which can prove very costly and time-consuming for the developer and the client. According to NCycles.com, the waterfall methodology structures a project into separate stages with defined deliverables from each phase:

    - Define
    - Design
    - Code
    - Test
    - Implement
    - Document and Maintain

    The advantages found by NCycles.com for this methodology are:

    - Ease in analyzing potential changes
    - Ability to coordinate larger teams, even if geographically distributed
    - Can enable a precise dollar budget
    - Less total time required from Subject Matter Experts

    The disadvantages found by NCycles.com for this methodology are:

    - Lack of flexibility
    - Hard to predict all needs in advance
    - Intangible knowledge lost between hand-offs
    - Lack of team cohesion
    - Design flaws not discovered until the Testing phase

    References: NCycles.com (2002). Retrieved from http://www.ncycles.com/e_whi_Methodologies.htm on April 17, 2009

    Read the article

  • SQL SERVER – Tell me What You Want to Listen – My 2 TechED 2011 Sessions

    - by pinaldave
    I am going to present two sessions at TechEd India on March 25, 2011. I would like to know what you want me to cover in these sessions. Watch the video taken by my wife while I was preparing for the sessions.

    Sessions Date: March 25, 2011

    Understanding SQL Server Behavioral Pattern – SQL Server Extended Events
    Date and Time: March 25, 2011, 12:00 PM to 01:00 PM

    SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting
    Date and Time: March 25, 2011, 04:15 PM to 05:15 PM

    I promise the following for both of my sessions:

    - I will share the scripts demonstrated in the session right at the end of the session
    - The sessions will be 300-400 level, but I promise to make the concepts very simple
    - Fewer slides and lots of meaningful demos
    - Sessions close to real-life cases and scenarios
    - Surprise gifts for the best participants
    - I promise to answer all the questions, either in the session or right outside the hall afterwards
    - Lots of technical education and FUN!

    Please leave your comments with your expectations, and if you are going to attend the sessions, do let me know here. We will for sure meet at the event and have some interesting talks. You can read the abstract of the sessions over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Building apps that work Together

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/07/03/building-apps-that-work-together.aspx

    Writing apps that stand alone will only get you so far. If your app can allow the user to leverage other applications and share data, you can have a real winner on your hands. Jake Sabulsky started off by explaining that you should be concentrating on the core functionality of your app and letting the framework take care of the features that users require these days. This is implemented by leveraging contracts. When Windows 8 was released it included the File, Share and Pickers contracts. With the release of Windows 8.1, the Contacts and Calendar contracts have been added. There have been a number of improvements to the original contracts. The File URI contract will now automatically detect the size at which a new window should be opened, and will also allow you to programmatically influence the new window size. The Share contract has been enhanced by allowing apps to always share screenshots and links to the app in the store. To my thinking, the contracts are one of the most powerful features of Windows 8. Take the time to view this session and learn how to leverage them. Technorati Tags: BUILD 2013,Windows 8,Live tiles
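
    For concreteness, here is a minimal sketch of what a Share-source contract looks like in a Windows 8 JavaScript app. This is not code from the session; it is my own illustration based on the WinRT JavaScript projection, and the sample title and URI are invented placeholders:

      // Hypothetical Share-source sketch for a Windows 8 JavaScript app.
      // 'Windows' is the WinRT namespace available to Store apps.
      var dtm = Windows.ApplicationModel.DataTransfer.DataTransferManager.getForCurrentView();

      // The system raises 'datarequested' when the user invokes the Share charm.
      dtm.addEventListener("datarequested", function (e) {
          var data = e.request.data;
          data.properties.title = "My sample app";              // placeholder title
          data.properties.description = "A link worth sharing"; // placeholder text
          // Share a link (placeholder URI) with whatever share target the user picks.
          data.setUri(new Windows.Foundation.Uri("http://example.com/myapp"));
      });

    Once a source app fills the DataPackage like this, any installed share target (Mail, the Store link mentioned above, and so on) can consume it without the two apps knowing anything about each other.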

    Read the article

  • Calculating the "power" of a player in a "Defend Your Castle" type game

    - by Jesse Emond
    I'm making a "Defend Your Castle" type game, where each player has a castle and must send units to destroy the opponent's castle. It looks like this (and yeah, this is the actual game, not a quick paint drawing..): Now, I'm trying to implement the AI of the opponent, and I'd like to create 4 different AI levels: Easy, Normal, Hard and Hardcore. I've never made any "serious" AI before, and I'd like to create a quite complete one this time. My idea is to calculate a player's "power" score, based on the current health of his castle and the individual "power" scores of his units. Then, the AI would just try to keep a score close to the player's (Easy would stay below it, Normal would stay near it and Hard would try to get above it). But I just don't know how to calculate a player's power score. There are just too many variables to take into account, and I don't know how to properly use them to create one significant number (the power level). Could anyone help me out on this one? Here are the variables that should influence a player's power score: current castle health, and each unit's total health, damage, speed and attack range. Also, the player can have increased income (the money bag), damage (the + Damage) and speed (the + speed)... How could I include them in the score? I'm really stuck here... Or is there another way that I could implement AI for this type of game? Thanks for your precious time.
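
    One workable starting point: treat power as castle health plus a weighted sum over units, where each unit contributes a rough damage-per-second figure (scaled up for attack range) plus its remaining health, and fold the income/damage/speed upgrades in as flat weighted bonuses. A JavaScript sketch follows; every weight here is an invented placeholder to tune by playtesting, not a known-good formula:

      // Hypothetical weights -- tune until Easy/Normal/Hard feel right.
      var W = { castle: 1.0, unitHealth: 0.5, dps: 2.0, range: 0.1, upgrade: 3.0 };

      function unitPower(u) {
          // Rough damage-output proxy, nudged upward for longer attack range.
          var dps = u.damage * u.speed * (1 + W.range * u.attackRange);
          return W.dps * dps + W.unitHealth * u.health;
      }

      function playerPower(p) {
          var total = W.castle * p.castleHealth;
          p.units.forEach(function (u) { total += unitPower(u); });
          // Upgrades scale future output, so count them as flat bonuses.
          total += W.upgrade * (p.incomeLevel + p.damageLevel + p.speedLevel);
          return total;
      }

    The AI then just compares playerPower(ai) against playerPower(human) each tick and buys units or upgrades until its score lands in the band its difficulty level targets (below, near, or above the human's score).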

    Read the article

  • BizTalk: Dynamic SMTP Port: Unknown Error Description

    - by Leonid Ganeline
    Today I investigated a strange error while working with a dynamic SMTP port.

    Event Type: Error
    Event Source: BizTalk Server 2006
    Event Category: BizTalk Server 2006
    Event ID: 5754
    Date: ********
    Time: ********AM
    User: N/A
    Computer: ********
    Description: A message sent to adapter "SMTP" on send port "*********" with URI "mailto:********.com" is suspended. Error details: Unknown Error Description
    MessageId: {********}
    InstanceID: {********}

    My code was pretty simple, and the source of the error was hidden somewhere inside it.

      msg_MyMessage(SMTP.CC) = var_CC;
      msg_MyMessage(SMTP.From) = var_From;
      msg_MyMessage(SMTP.Subject) = var_Subject;
      msg_MyMessage(SMTP.EmailBodyText) = var_Message;    // #1
      msg_MyMessage(SMTP.SMTPHost) = "localhost";
      msg_MyMessage(SMTP.SMTPAuthenticate) = 0;

    When I added line #2, this frustrating error disappeared.

      msg_MyMessage(SMTP.EmailBodyTextCharset) = "UTF-8"; // #2

    Conclusion: if we use the SMTP.EmailBodyText property, we must also set the SMTP.EmailBodyTextCharset property. To me it looks like a bug in BizTalk. [Maybe it is "by design", but in this case give us a useful error text!!!] And don't ask me how much time I've spent on this investigation.

    Read the article

  • PowerShell & SQL Compare

    - by Grant Fritchey
    Just a quick blog post to share a couple of scripts for using PowerShell to call SQL Compare. This is an example from my session at SQL in the City on setting up a sandbox development process. This just runs a compare between a set of scripts and a database and deploys it.

      set-Location "c:\Program Files (x86)\Red Gate\SQL Compare 10\";
      ./sqlcompare /s2:DOJO /db2:MovieManagement_Sandbox /sourcecontrol1 /vu1:grant /vp1:12345 /r1:HEAD /sfx:scripts.xml /sync /mfx:migrations.xml /verbose;

    I would not recommend using the /verbose output for real automation, but I'm showing off how the tool works. This particular script does a compare straight from source control to a database on my server. You can use variables where I've hard-coded values. That's it. Works great. Just wanted to share it out there. I have others that I'll track down and put up here.

    Read the article

  • Would you refactor this and if so, would you charge your client?

    - by Julius
    I am working on a freelance job at home. The client wants me to write some new functionality for his CMS, but it is taking me a lot of time to figure out what the code is doing, because it is written in a very unreadable style. Below is just an example of what I mean. The previous programmer made extensive use of anonymous functions and of eval(), he uses deeply nested ternary operators, he didn't indent code, didn't use comments, and he uses funny constructions like misusing the behaviour of the logical operators || and && to create if/else conditions (the second operand of && only gets evaluated if the first one is true, opening the possibility of using && as an if/else construction). All in all it's insane code, and it's costing me a lot of time to find out how the current code works.

      return ($this->main->context != "ajax" || in_array($this->type, $this->definition->ajax))
          ? eval('return method_exists($this,"Show'.ucfirst($this->type).'")
              ? $this->Show'.ucfirst($this->type).'('
                  .(count($args)
                      ? join(",", array_map(create_function('$a',
                          'return (is_numeric($a) || preg_match("/^array/",$a)) ? $a : "\"".$a."\"";'), $args))
                      : "")
                  .') : null;')
          : '';

    Would you refactor this code, and how would you handle this sort of thing with your client, I mean financially?
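
    For readers who haven't met the short-circuit trick the question mentions: because && only evaluates its right-hand side when the left is truthy, and || only when the left is falsy, cond && doThen() || doElse() behaves like an if/else - until the "then" branch returns a falsy value. A small JavaScript illustration (the function names are invented for the example):

      var isAdmin = true;                                       // hypothetical flag
      function showAdminPanel()  { console.log("admin panel"); }
      function showLoginPrompt() { console.log("login prompt"); }

      // The obfuscated idiom: '&&' plays 'then', '||' plays 'else'.
      isAdmin && showAdminPanel() || showLoginPrompt();

      // Pitfall: showAdminPanel() returns undefined (falsy), so even though
      // isAdmin is true, the '||' branch fires as well and BOTH lines print --
      // a bug a real if/else can never produce:
      if (isAdmin) { showAdminPanel(); } else { showLoginPrompt(); }

    That pitfall is exactly why code like this is a maintenance trap, whatever one decides about billing for the cleanup.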

    Read the article

  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I also come up with a range of possible hours for each task, and using the statistical calculations that McConnell describes, along with my historical accuracy data, I can generate estimates at other confidence levels when desired, as in the sketch below. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or when I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten or changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically just calculate whether I made the 50%-confidence estimate and whether I came in inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.
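
    For concreteness, here is a rough JavaScript sketch of the kind of aggregation being described: sum the task midpoints for the 50%-confidence total, approximate a per-task standard deviation from the best/worst range, and add the deviations in quadrature rather than linearly. The divisor below is a placeholder assumption (the idea of calibrating it against how often your actuals really fall inside your ranges is McConnell's, but the exact number is mine), so treat this as an illustration rather than his exact method:

      // tasks: [{ name, best, worst }] in hours.
      function aggregate(tasks) {
          var expected = 0, variance = 0;
          tasks.forEach(function (t) {
              var mid = (t.best + t.worst) / 2;   // 50%-confidence point
              var sd  = (t.worst - t.best) / 2.6; // placeholder divisor -- calibrate
              expected += mid;
              variance += sd * sd;                // quadrature, not straight addition
          });
          var sd = Math.sqrt(variance);
          return {
              p50: expected,          // 50%-confidence total
              p84: expected + sd,     // roughly one standard deviation above
              p16: expected - sd      // and below
          };
      }

      // Example with three invented tasks:
      console.log(aggregate([
          { name: "parser",  best: 2, worst: 5 },
          { name: "import",  best: 1, worst: 4 },
          { name: "UI glue", best: 3, worst: 6 }
      ]));

    A forgotten task can then be logged as a new entry with its own range, which keeps the historical data honest even when the original list was incomplete.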

    Read the article

  • Nvidia API mismatch

    - by Oli
    I had planned a day of relaxing with Portal 2, but on starting Steam (for the first time in a couple of weeks) I was greeted with the following message in the terminal:

    Error: API mismatch: the NVIDIA kernel module has version 270.41.19, but this NVIDIA driver component has version 270.41.06. Please make sure that the kernel module and all NVIDIA driver components have the same version.

    I'll confess I don't really know what it's talking about when it says "driver". The version of nvidia-current is 270.41.19. I thought that was the driver and module, all in one. I use the X-SWAT PPA, and I have noted that the nvidia-settings package has been boosted to 275.09.07. As this is just a settings application, I don't think this mismatch has anything to do with it. It's also not the same version as in the problem being described. I'd rather not purge back to the standard Nvidia driver, as it's less than stable on my GTX580. I would accept an answer that takes the manual setup and makes it recompile whenever the kernel is updated (i.e., some DKMS wizardry), but it has to work. I don't want to drop back to text mode every time I restart after a kernel upgrade. Edit: Minecraft works without a single complaint about driver versions. Penumbra dies with roughly the same error when entering a game.

    Read the article

  • In the days of modern computing, in 'typical business apps' - why does performance matter?

    - by Prog
    This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. I mention this to tell you that I have experience in programming, but not in the professional development of business applications. I see a lot of talk on this site about performance. People often debate what the most efficient algorithm in C# would be to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations happen every second in a constant update loop, or low-level systems that other programs rely on, such as OSes and VMs. But for the normal, typical high-level business app, why does performance matter? I can understand why it used to matter, decades ago. Computers were much slower and had much less memory, so you had to think carefully about these things. But today, we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference for the end users of this typical business app? When you press a GUI button in a typical business app, and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing - do you actually feel the inefficiency? My question is split in two: In practice, does performance matter today in a typical, normal business program? If it does, please give me real-world examples of places in such an application where performance and optimizations are important.
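
    One concrete way to see where the line falls: a quadratic algorithm is invisible at the sizes of a toy demo and painful at the sizes a real app eventually reaches. A small JavaScript sketch (the data and sizes are invented; run it to get numbers for your own machine):

      function hasDupQuadratic(a) {           // O(n^2): compare every pair
          for (var i = 0; i < a.length; i++)
              for (var j = i + 1; j < a.length; j++)
                  if (a[i] === a[j]) return true;
          return false;
      }

      function hasDupLinear(a) {              // O(n): one pass with a lookup table
          var seen = {};
          for (var i = 0; i < a.length; i++) {
              if (seen[a[i]]) return true;
              seen[a[i]] = true;
          }
          return false;
      }

      var data = [];
      for (var k = 0; k < 50000; k++) data.push("row-" + k);  // 50k unique "records"

      console.time("quadratic"); hasDupQuadratic(data); console.timeEnd("quadratic");
      console.time("linear");    hasDupLinear(data);    console.timeEnd("linear");

    At n = 100 both finish instantly; at n = 50,000 the quadratic version typically takes seconds while the linear one stays in milliseconds, which is roughly the moment a user starts noticing the GUI button from the question.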

    Read the article

  • Hardware compatibility on H97 chipset/hardware support

    - by user3238850
    I am aware that there is documentation about compatibility, but it is way outdated. I am also aware that there is a hardware compatibility page on the Ubuntu website, but that one is focused on the whole box rather than a single piece of hardware. I have some experience with Linux, and some experience playing with Ubuntu Server in a virtual machine, but I have never worked on a machine that lives on the real internet. I am building a home server with an Intel H97 chipset motherboard. I have looked at several models, and none of them has Linux in the supported-OS category. I have the experience of installing Ubuntu Desktop 14.04 on my 4-year-old laptop, and except for some system errors on startup, there is not too much I can complain about, so I guess I should be fine. However, this time I am going to install Ubuntu Server 14.04 on a relatively new piece of hardware (I went to http://linux-drivers.org/ but found nothing really helpful). For example, the ASUS motherboard has an M.2 socket and an Intel LAN I218V chip, and the Gigabyte motherboard has two LAN chips (Intel LAN WGI217V and Atheros AR8161-BL3A-R). So I really want to make sure everything will work. Usually I would just trust Ubuntu and buy all the hardware I need, but based on my past experience with the Ubuntu Desktop version on my laptop, I am not so convinced. There is an easily noticeable difference: when the system is idle, the fan runs much more frequently and longer under Ubuntu. This leads to my suspicion that hardware will generally have worse support under Ubuntu, which is not surprising at all but is enough for me to put this post here. And as far as I know, some Intel CPU features come with software that usually will not run under Linux. Any help, ideas or thoughts would be greatly appreciated!

    Read the article

  • Desktop forgets theme?

    - by Marcelo Cantos
    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar — and seemingly every system dialog — have forgotten the out-of-the-box "Ambiance" theme they conformed to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting the .gconf* and .gnome* directories and other similar ones. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matte white. This last time (just yesterday), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files — as suggested in many places online — fails to achieve this? It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.

    Read the article

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I ask udev now to re-read the config? service udev restart and udevadm control --reload-rules don't help. So is there any valid way other than rebooting? (Yes, reboot fixes this issue.) UPD: yes, I know I should prepend the commands with sudo, but either one I posted above changes nothing in the ifconfig -a output: I still see eth1, not eth0. UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. There are no errors when executing either of the commands I posted above, but they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected. UPD 3: let me explain the whole case better ;-) For development purposes I am writing a script that clones virtual machines (VirtualBox-driven) and pre-sets them up in some way. So I run a command to clone a VM and start it, and since the network interface MAC has changed, udev adds a second rule to the network persistent rules. Right after the machine is booted for the first time there are 2 rules: eth0, which does not exist, since it refers to the MAC from the original VM image, and eth1, which exists, but all the configuration in all the files refers to eth0, so it is not that good for me. So with sed I delete the line with eth0 (it is obsolete and useless in the cloned image) and replace eth1 with eth0. So currently I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it would take extra time, which is not a good thing at the VM-building stage) and just want to have my /dev rebuilt with some command, so that I have a ready-to-use VM without any reboots.

    Read the article

  • Javascript: Machine Constants Applicable?

    - by DavidB2013
    I write numerical routines for students of science and engineering (although they are freely available for use by anybody else as well) and am wondering how to properly use machine constants in a JavaScript program, or if they are even applicable. For example, say I am writing a program in C++ that numerically computes the roots of the following equation:

    exp(-0.7x) + sin(3x) - 1.2x + 0.3546 = 0

    A root-finding routine should be able to compute roots to within the machine epsilon. In C++, this value is specified by the language: DBL_EPSILON. C++ also specifies the smallest and largest values that can be held by a float or double variable. However, how does this convert to JavaScript? Since a JavaScript program runs in a web browser, and I don't know what kind of computer will run the program, and JavaScript does not have corresponding predefined values for these quantities, how can I implement my own version of these constants so that my programs compute results to as much accuracy as is allowed on the computer running the web browser? My first draft is to simply copy over the literal constants from C++:

    FLT_MIN: 1.17549435082229e-038
    FLT_MAX: 3.40282346638529e+038
    DBL_EPSILON: 2.2204460492503131e-16

    I am also willing to write small code blocks that could compute these values for each machine on which the program is run. That way, a supercomputer might compute results to a higher accuracy than an old, low-end PC. BUT, I don't know if such a routine would actually reflect the machine it runs on, in which case I would be wasting my time. Does anybody here know how to compute and use (in JavaScript) values that correspond to the machine constants in a compiled language? Is it worth my time to write small programs in JavaScript that compute DBL_EPSILON, FLT_MIN, FLT_MAX, etc. for use in numerical routines? Or am I better off simply assigning literal constants that come straight from C++ on a standard Windows PC?
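
    For what it's worth, the epsilon half of this is answerable at run time: JavaScript numbers are IEEE 754 doubles on every conforming engine, so the computed values match C++'s DBL_* constants rather than varying by machine. A sketch (the MIN_VALUE caveat is the one real mismatch to watch for):

      // Compute machine epsilon: halve until 1 + eps/2 rounds back to 1.
      function computeEpsilon() {
          var eps = 1;
          while (1 + eps / 2 > 1) eps /= 2;
          return eps;                        // 2.220446049250313e-16 for doubles
      }

      var DBL_EPSILON = computeEpsilon();
      var DBL_MAX = Number.MAX_VALUE;        // 1.7976931348623157e+308
      // Caution: Number.MIN_VALUE is 5e-324, the smallest *denormal* double --
      // much smaller than C++'s DBL_MIN, which is the smallest *normal* double
      // (about 2.2250738585072014e-308).

    Newer engines also expose Number.EPSILON directly, and since the language has no 32-bit float type, FLT_MIN/FLT_MAX have no native counterpart unless you emulate single precision yourself.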

    Read the article

  • Oracle and Cavium to work together on Java SE 8 on 64-bit ARMv8

    - by Henrik Stahl
    We have been working for some time on a standard Oracle JDK 8 port for the upcoming 64-bit servers based on the new ARMv8 microarchitecture. At ARM TechCon 2013 in Santa Clara, California, we announced a roadmap with an expected GA in 2015. This project is going very well and is ahead of schedule. We will soon be at the point where we will make binaries available outside of Oracle - first in a managed beta program with select customers/partners, and sometime during the fall of 2014 as a public early access program. Unless something changes, we are looking at an early 2015 GA. We should be able to share a detailed ramp-down and GA plan by JavaOne 2014. One of the things we (obviously) need in order to produce a high-quality port is hardware for development and QA. We are therefore happy to announce that we will be collaborating with Cavium on this project. Cavium has been a supporter of the Java ecosystem for a long time, and we have numerous joint customers running various Java versions on Cavium MIPS and ARM-based hardware. Cavium has now agreed to provide us with development hardware and engineering resources so that we can certify and optimize the initial Oracle JDK 8 release on Cavium's ThunderX hardware. This is expected to improve the quality and performance of JDK 8 on ARMv8 in general, as well as on Cavium's hardware. For more information: Cavium announcement on the ThunderX product family; Cavium announcement on the Oracle collaboration. As a reminder, we plan to release the Oracle JDK 8 port for 64-bit ARMv8 under the royalty-free (for general purpose servers, etc.) Binary Code License, but we have no current plans to open source it.

    Read the article

  • ArchBeat Link-o-Rama Top 20 for April 1-9, 2012

    - by Bob Rhubart
    The top 20 most popular items shared via my social networks for the week of April 1 - 8, 2012.

    - Webcast: Oracle Maximum Availability Architecture Best Practices w/Tom Kyte - April 12
    - Oracle Cloud Conference: dates and locations worldwide
    - Bad Practice Use Case for LOV Performance Implementation in ADF BC | Oracle ACE Director Andresjus Baranovskis
    - How to create a Global Rule that stores a document’s folder path in a custom metadata field | Nicolas Montoya
    - MySQL Cluster 7.2 GA Released
    - How to deal with transport level security policy with OSB | Jian Liang
    - Webcast Series: Data Warehousing Best Practices http://bit.ly/I0yUx1
    - Interactive Webcast and Live Chat: Oracle Enterprise Manager Ops Center 12c Launch - April 12
    - Is This How the Execs React to Your Recommendations? | Rick Ramsey
    - Unsolicited login with OAM 11g | Chris Johnson
    - Event: OTN Developer Day: MySQL - New York - May 2
    - OTN Member discounts for April: Save up to 40% on titles from Oracle Press, Pearson, O'Reilly, Apress, and more
    - Get Proactive with Fusion Middleware | Daniel Mortimer
    - How to use the Human WorkFlow Web Services | Oracle ACE Edwin Biemond
    - Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH
    - IOUG Real World Performance Tour, w/Tom Kyte, Andrew Holdsworth, Graham Wood
    - WebLogic Server Performance and Tuning: Part I - Tuning JVM | Gokhan Gungor
    - Crawling a Content Folio | Kyle Hatlestad
    - The Java EE 6 Example - Galleria - Part 1 | Oracle ACE Director Markus Eisele
    - Reminder: JavaOne Call For Papers Closing April 9th, 11:59pm | Arun Gupta

    Thought for the Day: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." — Leslie Lamport

    Read the article

  • How to show or direct a business analyst to a data modelling subject?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet. I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up. A lot of times there will be a little table of items that I somehow have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of this, trying to write something that can compare strings and make a best guess, and then go through the effort of creating interfaces for a user to match the imported data to the destination. I feel like if the business analysts were actually creating a data model, they would be forced to think about these relationships, and would have an appreciation for the need for natural or business keys to be part of the spreadsheet for the purposes of smoothly importing the data. The closest they come to business analysis is a big flat list of fields, and that would be fine if it were like any other data dictionary and included data types and relationships, but it isn't. They are just a bunch of names. There is no indication of what type of data they might hold, and it is up to me to guess. When I have pushed for more detail, they say that it is just busy work. How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance. They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response.

    Read the article

  • Don't Miss the Social Engagement Center -- See How Social Cloud Tools Can Work for You

    - by Oracle OpenWorld Blog Team
    Are you ready to get social at Oracle OpenWorld? Stop by the Oracle Social Engagement Center in the Moscone South Upper Lobby (near the South Meetup location) and see Oracle Cloud Social Services in action. Ask Oracle's social experts how they're using next-generation enterprise social tools to deliver extreme engagement. Watch in near real-time as Oracle reaches out to inform, inspire, and engage global communities. We're showing:

    - Collective Intellect for specific data sets on 2 large screens
    - Vitrue analytics and Vitrue publishing on 2 large screens
    - Relative Twitter activity across the hashtags #OOW, #OOW12, #openworld, and #oracle, and the accounts @oracle and @openworld, on 1 large screen

    Plus we have 5 computers where we're actively working with the Collective Intellect and Vitrue technologies, so you can see how they function. So come visit the Social Engagement Center to learn how Oracle is using and engaging with these tools. And don't forget the Social Plaza @ OpenWorld event on Tuesday from noon - 8:00 p.m. Join us for food, drink, the afternoon keynote, and some cool libations on a hot afternoon.

    Read the article
