Search Results

Search found 26517 results on 1061 pages for 'large directory'.


  • This not having a job is driving me nuts!

    - by Ratman21
    I had two jobs lined up (temporary, but hey, they pay), one of which was in IT: a very low-paying IT job, only 30 hours a week and only 3 weeks, at a large high-tech company. The other was a government temp job (not in my field), paying a bit more, for a little longer and for up to 40 hours a week. I was going to happily work my little self ragged for the next 3 weeks. Guess what: the IT job fell through, and I now feel so let down. I felt this was my chance to get back into IT, even if it was only for a few weeks, and maybe get noticed as a hard-working IT guy. I still have the other job, but let me add that there is no chance it will turn into something longer (I have been told that point blank). As I said, this is nuts.

    Read the article

  • Webcor Builders Coordinates Construction Schedules and Mitigates Potential Delays More Efficiently with Integrated Project Management

    - by Sylvie MacKenzie, PMP
    With more than 40 years of commercial construction experience, Webcor Builders is a leading builder of distinguished, high-profile projects, including high-rise condominiums and hotels, laboratories, healthcare centers, and public works projects. Webcor is also known for its award-winning concrete, interior construction, historic restoration, and seismic renovation work. The company has completed more than 50 million square feet of projects to date. Considering the variety and complexity of the construction projects Webcor undertakes, an integrated project management solution is critical to ensuring optimal efficiency and completing client projects on time and on budget. The company previously used a number of scheduling systems for its various building projects. These packages provided different levels of schedule detail and required schedulers, engineers, and other employees to learn multiple systems. From an IT cost and complexity perspective, the company had to manage multiple scheduling systems and pay for multiple sets of licenses. The company looked to standardize on an enterprise project management system, and selected Oracle's Primavera P6 Enterprise Project Portfolio Management. Webcor uses the solution's advanced capabilities to schedule complex projects, analyze delays, model and propose multiple scenarios to demonstrate and mitigate delays and cost overruns, and process that information efficiently to deliver the scheduling precision that public and private projects require. In fact, the solution was instrumental in the company's expansion into public sector projects during the recent economic downturn: with Primavera P6 in place, the company could deliver the precise scheduling and milestone reporting required for large public projects. The solution is now in use managing the high-profile University of California, Berkeley Memorial Stadium project, for which Webcor was hired as construction manager and general contractor. The stadium renovation is a fast-paced project located near the seismically active Hayward Fault Zone. Due to the University of California's football schedule, meeting the University's deadline for the coming season placed Webcor in a situation where risk awareness and early warnings of issues would be paramount. Webcor and the extended project team needed a solution that could instantly analyze alternate scenarios to mitigate potential delays; Primavera would deliver those answers. The team also needed to enable multiple stakeholders to use an internet-based platform to access the schedule from various locations, and to model complicated sequencing requirements where swift decisions would be made to keep the project on track. The schedule is an integral part of Webcor's construction management process for the stadium project.
    Rather than providing the client with the industry-standard monthly update, Webcor updates the critical path method (CPM) schedule on a weekly basis. The project team also reviews the schedule and updates it weekly to confirm that progress and forecasted performance are accurate. Hired by the University for its ability to deliver in high-risk environments, the Webcor team was hit recently with a design supplement that could have added up to 70 days to the project. Using Oracle Primavera P6, the team sprang into action, analyzing multiple "what if" scenarios to review mitigation means and methods. Determined to make sure the Bears could take the field in the coming season, the project team nearly eliminated the impact with their creative analysis in working the schedule. The total time from the issuance of the final design supplement to an agreed mitigation response was less than one week; leveraging the Oracle Primavera solution, Webcor was able to deliver superior customer value. With the ability to efficiently manage projects and schedules, Webcor can ensure it completes its projects on time and on budget, as well as inform clients about what changes to plans will mean in terms of delays and additional costs. Read the complete customer case study at: http://www.oracle.com/us/corporate/customers/customersearch/webcor-builders-1-primavera-ss-1639886.html

    Read the article

  • How can I monitor a website for malicious changes to the files?

    - by user41421
    I had an occasion recently where our website was compromised: a link farm was added to a couple of the pages on one occasion, and on another, a large and nasty aspx file was put on the server. I won't mention the host's name (Hostway), but I was pretty annoyed that someone was able to do this. No, it wasn't a leaky password; around 10 sites hosted by HW with consecutive IP addresses got trashed. Anyway, what I need is a utility or service (preferably free) that takes a snapshot of my website's contents and then regularly monitors the files (size and datestamp) for unauthorized changes or additions, and alerts me. I've used web services that monitor one file for changes, but I'm looking for something a bit more aggressive.
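
    If nothing off the shelf fits, the baseline-and-compare part is small enough to script yourself. Below is a minimal sketch in Python (the snapshot path and the alerting step are placeholders, not a specific tool recommendation): it records each file's size and modification time, then reports anything new or changed on later runs.

        import json
        import os
        import sys

        SNAPSHOT = "site_snapshot.json"  # hypothetical location for the baseline

        def scan(root):
            """Record size and datestamp for every file under root."""
            state = {}
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    st = os.stat(path)
                    state[path] = [st.st_size, st.st_mtime]
            return state

        def main(root):
            current = scan(root)
            if os.path.exists(SNAPSHOT):
                with open(SNAPSHOT) as f:
                    baseline = json.load(f)
                for path, meta in current.items():
                    if path not in baseline:
                        print("NEW FILE:", path)   # swap prints for an email alert
                    elif baseline[path] != meta:
                        print("CHANGED:", path)
            with open(SNAPSHOT, "w") as f:
                json.dump(current, f)

        if __name__ == "__main__":
            main(sys.argv[1])

    Run it on a schedule against a local mirror of the site (pulled via FTP, since the files live on a shared host), and anything an intruder adds or edits shows up on the next pass.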

    Read the article

  • Switching songs - MediaPlayer lags the game

    - by Fibericon
    When the player encounters a boss in the game I'm working on, I want to have the music change. It seems simple enough with the MediaPlayer class to fade out the current song, switch to another, and then fade the new song in. However, at the point where the second song starts, the game freezes for a split second. The songs in question aren't particularly large either: the first song is 1.7 MB and the second is 3.1 MB, both in MP3 format. This is the code I'm using to do it:

        protected void switchSong(GameTime gameTime)
        {
            if (!bossSongPlaying)
            {
                // Fade the current song out over ten seconds
                MediaPlayer.Volume -= ((float)gameTime.ElapsedGameTime.TotalSeconds / 10);
                if (MediaPlayer.Volume < 0.05f)
                {
                    // Once it is essentially silent, start the boss song at full volume
                    MediaPlayer.Play(bossSong);
                    MediaPlayer.Volume = 1.0f;
                    bossSongPlaying = true;
                }
            }
        }

    What can I do to eliminate that momentary hang?

    Read the article

  • Is it possible to sync two computers without storing the files on a server?

    - by William
    I have a family member currently using Windows Live Mesh to sync a relatively large amount of files between computers. It is way over the Ubuntu One 5 GB limit and the Live Mesh 2 GB limit. However, Live Mesh gives him the option of syncing all the data he wants without storing it on Microsoft's servers. Does Ubuntu One have an equivalent option, performing just a computer-to-computer sync and not computer-to-server and server-to-computer? Do you have other recommendations? It does not necessarily have to be Ubuntu One, but I need it to be cross-platform, working across Windows and Ubuntu. We also have computers outside of our home network we need to sync to. This is one of the few things keeping him from switching to Ubuntu, and I'd be very grateful for any help.

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post. SQLIO works by slamming your disk. It performs as many reads as it can, or it performs as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software. My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system: the disk for sure, but also memory and CPU. How to stress the system? SQLIO, of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning, and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up? Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to fill the file with the character 0x0. I never got a computer science degree. I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right: you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk subsystem. But when you want to test compression and decompression, that can be an issue. I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes? Well, the issue is, what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896 KB went to 275,871 KB compressed; after SQLIO it went to 608 KB compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.
    Should I find some other mechanism for testing? If all I'm interested in is establishing performance to my own satisfaction, yes. But I want to be able to compare my results with other people's results, and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
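
    The effect is easy to reproduce without SQLIO. Here's a quick Python sketch (buffer sizes shrunk to 1 MB for illustration) comparing how a NULL-filled buffer compresses against random bytes, which behave more like already-compressed data:

        import os
        import zlib

        size = 1024 * 1024
        null_data = b"\x00" * size      # what SQLIO writes: the 0x0 (NULL) character
        random_data = os.urandom(size)  # stand-in for realistic, hard-to-compress data

        for label, data in [("NULL-filled", null_data), ("random", random_data)]:
            compressed = zlib.compress(data)
            ratio = 100 * (1 - len(compressed) / len(data))
            print(f"{label}: {len(data)} -> {len(compressed)} bytes ({ratio:.1f}% compression)")

    The NULL buffer compresses by better than 99%, just like the .DAT file above, while the random buffer barely compresses at all; a real database file falls somewhere in between.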

    Read the article

  • Easiest, most fun way to program 2D games? Flash? XNA? Some other engine?

    - by Maxi
    Hi, this is a post detailing my search for the most enjoyable way for a hobbyist game programmer to sweeten his free time with making a game. My requirements: I looked at Flash first. I made a couple of small games, but I'm doubtful of the performance. I would like to make a fairly large strategy game, with several hundred units fighting simultaneously, explosions and animations included. Also zoomable maps. I saw that Adobe has a new 3D API for Flash, but I don't know if that improves 2D performance as well; I couldn't find anything related to that question in their MAX10 sessions. Would you say that Flash is a good technology for making large 2D games easily? I really like ActionScript, and I love how easy everything is in Flash. There are several engines available which make it even easier. I just do this for fun, and it would be even better if there were proper animation/particle editors available and if the engine I were to use were available for multiple platforms (so more people can play my game once finished). I'd like to have it available on many mobile platforms as well (because I love touch input for some reason). I do know the XNA framework pretty well, but there are no good engines available for it, and it will only run on Windows, which is a huge turn-off. Even bigger is that you need to install the XNA redistributable each time you want to give the game to someone. If I use XNA, I would have to make all the tools myself, and I'd probably have to make them with WPF. (I'd love to make tools with Adobe AIR, but unfortunately the APIs for image manipulation etc. are far worse in Flash than they are in XNA/WPF.) Now, I'm aware that I could make my own engine that supports each of those platforms, but quite frankly, that would be too much work plowing through APIs. After all, I want to make a game, not an engine. So the question becomes: is there a cross-platform (free, or free to develop with?) engine available that I could use for 2D development? I prefer C# and ActionScript. I don't mind using C++ if the toolset is above average, but I highly doubt that there is something out there like that. Please prove me wrong :) So, summary: I'd like to use Flash, but I don't know if it scales well enough. I'm not a scripter; I want some real APIs that I can work with inside a proper IDE. Just for information, I looked at several alternatives; I've actually been looking for a long time already. You'd help me a lot to finally make a decision.
    • FlatRedBall: Feature-wise, this engine would be ideal. But I tried their tools, and quite frankly, they are horrible. Absolutely unusable; I'd need to make my own for sure. I didn't look at their API, but if their tools are so bad, I'm not inclined to look further.
    • Unity3D: This one is quite nice, but I really don't need 3D, and it is quite... a lot of work to learn. I also don't like that it is so expensive to use for different platforms and that I can only code for it through scripting. You have to buy each platform separately. The editor usability is average; the product overall is good enough for most purposes, but learning it myself would be overkill.
    • Shiva 3D: It looks good enough, but again, I don't really need 3D. The editor usability is a little worse than Unity3D's in my opinion, and it wasn't clear to me how to start programming. I think it requires C++ for coding, so that's a negative too. I want to have fun, and C# is fun ;)
    • SDL: Quite frankly, I'd still need to port to all those different SDL implementations. And I don't like OpenGL-style programming; it's just plain ugly. And it needs C++. I know that there might be some wrappers available, but I don't like to use wrappers.
    • Irrlicht: A lot of features, but support seems to be low and it is aimed at enthusiasts. C# bindings get dropped repeatedly. I'm not an engine enthusiast; I just want to make a game. I don't see this happening with Irrlicht.
    • Ogre3D: Way too much work; it's just a graphics engine. Also no multi-platform support, and C++.
    • Torque2D: Costs something to use, and I didn't hear a lot of good things about support and documentation. Also costs extra for each platform.

    Read the article

  • Integrated ads in phone apps - how to avoid wasting battery?

    - by Jarede
    Considering the PCWorld review that came out in March, "Free Android Apps Packed with Ads are Major Battery Drains": ...Researchers from Purdue University in collaboration with Microsoft claim that third-party advertising in free smartphone apps can be responsible for as much as 65 percent to 75 percent of an app's energy consumption... Is there a best practice for integrating advert support into mobile applications, so as not to drain the user's battery too much? ...When you fire up Angry Birds on your Android phone, the researchers found that the core gaming component only consumes about 18 percent of total app energy. The biggest battery suck comes from the software powering third-party ads and analytics, accounting for 45 percent of total app energy, according to the study... Has anyone found better ways of keeping away from the "3G tail", as the report puts it? Is it better/possible to download a large set of adverts that are cached for a few hours and use them to populate your ad space, to avoid constant use of the Wi-Fi/3G radios? Are there any best practices for the inclusion of adverts in mobile apps?

    Read the article

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series. Right-Time Integration: Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can't react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word "appropriate." Not everything needs to be real-time; again, we're talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you're wasting effort. Unfortunately, there isn't an enterprise-wide dial that you can crank up for your estate. You'll need to improve things piecemeal, with people and processes as limiting factors, while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry. First is batch. I know, the word "batch" just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you'll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics, where we can apply science. Performing analytics on each sale as it occurs doesn't make any sense, so we batch up a statistically significant amount and submit it all at once. The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP, but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use message-oriented middleware available in both WebLogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help with the management and monitoring of fire-and-forget messaging in the enterprise. The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important.
    Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we've enhanced its abilities with the Retail Service Backbone (RSB). To better understand these integration patterns and where they apply within the retail enterprise, we're providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we'll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained. Part 3 looks at marketing.
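
    For readers who think in code, here is a toy Python sketch (plain standard-library pieces, not Oracle middleware) contrasting the fire-and-forget and request-response styles described above:

        import queue
        import threading
        import time

        outbound = queue.Queue()  # stands in for a durable middleware queue

        def fire_and_forget(transaction):
            # The sender enqueues and moves on; delivery happens later, in order.
            outbound.put(transaction)

        def delivery_worker():
            while True:
                txn = outbound.get()      # waits in the queue if the consumer is offline
                print("delivered:", txn)  # e.g. a POS transaction reaching Central Office
                outbound.task_done()

        def request_response(sku):
            # The sender blocks until the answer comes back; speed matters here.
            time.sleep(0.01)              # simulated service round trip
            return {"sku": sku, "on_hand": 42}

        threading.Thread(target=delivery_worker, daemon=True).start()
        fire_and_forget({"type": "sale", "amount": 19.99})  # returns immediately
        print(request_response("ABC-123"))                  # waits for the reply
        outbound.join()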

    Read the article

  • Translatability Guidelines for Usability Professionals

    - by ultan o'broin
    There is clearly a demand for translatability guidelines aimed at usability professionals working in the enterprise applications space, judging by Google Analytics and the interest generated in the Twitterverse by my previous post on the subject. So let's continue the conversation. I'll flesh out each of the original points a bit more in posts over the coming weeks. Bear in mind that large-scale enterprise translation is a process. It needs to be scalable, repeatable, and maintainable, and above all meet the requirements of automation. That doesn't mean the user experience needs to suffer, however. So stay tuned for some translatability best practices for usability professionals.

    Read the article

  • Why ISVs Run Applications on Oracle SuperCluster

    - by Parnian Taidi-Oracle
    Michael Palmeter, Senior Director of Product Development for Oracle Engineered Systems, discusses in this short video how ISVs can easily run up to 20x faster, gain 28:1 storage compression, and grow their presence in the market, all without any changes to their code. One of the family of Oracle engineered systems, Oracle SuperCluster provides maximum end-to-end database and application performance with minimal initial and ongoing support and maintenance effort, at the lowest total cost of ownership. Java or enterprise applications running on Oracle Database 11gR2 or higher and Oracle Solaris 11 can run up to 20x faster on Oracle SuperCluster than on traditional platforms, without any changes to their code. A large number of customers are consolidating hundreds of their applications and databases on Oracle SuperCluster and are requiring their ISVs to support it. ISVs can become Oracle SuperCluster Ready and Oracle SuperCluster Optimized by joining the Oracle Exastack program.

    Read the article

  • How to use PostgreSQL on AWS - Ubuntu 11.10

    - by That1Guy
    I'm extremely new to cloud computing, Linux, and PostgreSQL, so if this is a stupid question, I apologize. I've managed to create an m1.large instance running Ubuntu 11.10, connect via PuTTY SSH, and install PostgreSQL (sudo apt-get install postgresql), but that is as far as I've gotten. My goal is to run several Python web-scraping scripts that I've written on this instance (so as not to eat up all of our bandwidth (smaller company at the moment)), insert the scraped data into a PostgreSQL table on the instance, and later retrieve that data to store on our local server (as I've heard AWS EBS is unreliable and I don't want to take chances). How can I configure PostgreSQL on my AWS instance? How can I access the data from my machine? I currently use PgAdmin3 to manage PostgreSQL on our local server. Can I use this same interface to manage PostgreSQL on my AWS instance? Any suggestions, solutions, links, etc. are greatly appreciated. And again, if this is a dumb question, I apologize.
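
    For the scraper-to-table part, here is a rough sketch of what the Python side might look like with the psycopg2 driver (the database name, role, and columns are made up for illustration; on Ubuntu 11.10 the driver installs with sudo apt-get install python-psycopg2):

        import psycopg2

        # Placeholders: create the role and database first with createuser/createdb.
        conn = psycopg2.connect(host="localhost", dbname="scraper",
                                user="scraper", password="secret")
        cur = conn.cursor()

        cur.execute("""
            CREATE TABLE IF NOT EXISTS scraped_pages (
                id SERIAL PRIMARY KEY,
                url TEXT NOT NULL,
                fetched_at TIMESTAMP DEFAULT now(),
                body TEXT
            )
        """)

        # A parameterized insert keeps scraped content from breaking the SQL.
        cur.execute("INSERT INTO scraped_pages (url, body) VALUES (%s, %s)",
                    ("http://example.com", "<html>...</html>"))

        conn.commit()
        cur.close()
        conn.close()

    To reach the database from PgAdmin3 on your own machine, you would also need to set listen_addresses in postgresql.conf, add a client line to pg_hba.conf, and open port 5432 in the instance's security group, or more simply tunnel port 5432 over the SSH connection you already have.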

    Read the article

  • Now Available: Oracle Utilities Customer Self Service Version 2.1

    - by Roxana Babiciu
    The Oracle Utilities Global Business Unit is pleased to announce the general availability of Oracle Utilities Customer Self Service 2.1. It is ready for customers and partners to download and install via the Oracle Software Delivery Cloud. Key Features & Benefits: Oracle Utilities Customer Self Service 2.1 includes several new capabilities and enhancements, including significantly improved Commercial Account Management and Advanced Notification Management using a new Oracle Utilities Notification Center module (licensed separately). These include the following:
    • Advanced Notification Management
    • Online Issues and Forms Management
    • Budget Management and Billing for Billed Budgets
    • Prepaid User Dashboard
    • Enhanced Usage Details Web Presentment
    • Start/Stop/Transfer Service Automation
    • Payment Arrangement Automation
    • Account Sets Management for Large Commercial Customers
    • Multiple Account Usage Data Aggregation, Comparison, and Data Download
    • Multiple Account Financial History
    • Mobile Outage Maps
    More information can be found on OPN.

    Read the article

  • File system layout for multiple build targets

    - by Yttrill
    I am seeking some ideas for how to build and install software with some parameters. These include target OS, target platform CPU details, debugging variant, etc. Some parts of the install are shared, such as documentation and many platform-independent files; others are not, such as 64- and 32-bit libraries when these are separated and not together in a multi-arch library. On big networked platforms one often has multiple computers sharing some large server space, so there is actually cause to have even Windows and Unix binaries on the same disk. My product has already fixed an install philosophy of $INSTALL_ROOT/genericname/version/ so that multiple versions can coexist. The question is: how to manage the layout of all the other stuff?
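
    One common answer is to extend the scheme you already fixed with a per-target subtree keyed by a platform triple, keeping shared files at the version level. A hypothetical sketch in Python (the directory names are illustrative, not a standard):

        import os

        INSTALL_ROOT = "/opt"  # placeholder

        def install_dir(name, version, os_name=None, cpu=None, variant=None):
            """Shared files live in <root>/<name>/<version>/share; target-specific
            files in <root>/<name>/<version>/<os>-<cpu>-<variant>."""
            base = os.path.join(INSTALL_ROOT, name, version)
            if os_name is None:
                return os.path.join(base, "share")  # docs, headers, scripts
            triple = "-".join([os_name, cpu, variant or "release"])
            return os.path.join(base, triple)       # libraries and binaries

        print(install_dir("mytool", "1.2"))                     # /opt/mytool/1.2/share
        print(install_dir("mytool", "1.2", "linux", "x86_64"))  # /opt/mytool/1.2/linux-x86_64-release
        print(install_dir("mytool", "1.2", "windows", "x86", "debug"))

    Windows and Unix binaries then coexist on the same disk as sibling directories, and the shared documentation is stored exactly once per version.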

    Read the article

  • About partition sizes

    - by Lassi
    I am going to install Ubuntu on a new computer, but I'm not quite sure how big each partition should be. If I create only root, home, and swap partitions, on what partition will programs be installed? Will they go to /home or to root? Basically, does it make sense, for instance, to have the following partitions: / - 6GB, /home - 80GB, swap - 4GB? Is 6GB large enough for my root partition? Also, are these 3 partitions a good choice, or is there a better configuration? I have at the moment 3 operating systems installed, and I do make changes quite often.

    Read the article

  • What is the difference between these senior software engineer titles?

    - by stackoverflowuser2010
    I'm currently a senior research software engineer at a large company and am being offered a "senior staff engineer" position somewhere else. I am not sure if the new position's title conveys a sideways move or an advancement. So, all other things being roughly equal (salary, domain of expertise, etc.), what is the external difference between these software engineer titles (in general and regardless of any particular company, if possible):
    • senior engineer
    • senior research engineer
    • senior staff engineer
    • member of technical staff
    • principal engineer
    Edit: Let me elaborate on "member of technical staff" since it's kind of uncommon. I think it's a high title, commonly associated with research. I know that Oracle, VMware, and the old Bell Labs have these titles. See: Member of Technical Staff. I know what it means, but I don't know how it stacks up against the other titles, which is why I asked.

    Read the article

  • "A good programmer can be as 10+ times more productive as a mediocre one"

    - by m3th0dman
    I read an interview with a great programmer (it is not in English), and in it he said that "a great programmer can be 100 times as good as a mediocre one", giving this as the reason why good programmers are very well paid and why programming companies provide many perks for their employees. The idea was that there is a very large demand for good programmers because of the above reason, and that's why companies pay a lot to bring them in. Do you agree with this statement? Do you know any objective facts that could support it? Edit: The question has nothing to do with experience; if you talk about one great programmer with 1 year of experience, then s/he should be 10 times more productive than a mediocre programmer with 1 year of experience. I agree that from a certain number of years of experience onwards, things start to dissipate, but that's not the purpose of the question.

    Read the article

  • forward sudo verification

    - by Timo Kluck
    I often use the following construct for building and installing a tarball: sudo -v && make && sudo make install which will allow me to enter my password immediately and have everything done unattended. This works well except in the rare case that building takes longer than the sudo timeout, which may happen on my rather slow machine with large projects (even when using make -j4). But when the build takes a long time, that's exactly when doing things unattended has a great advantage. Can anyone think of a shell construct that allows me to input my password immediately, and which has make executing under normal permissions and make install under elevated permissions? For security reasons, I don't want to configure my user to use sudo without a password. A viable option is to set the timeout to something very long, but I'm hoping for something more elegant.
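
    One workaround I know of is to keep the sudo timestamp fresh from a background task while the build runs. Here is the idea sketched in Python rather than shell, so the moving parts are explicit (treat it as a sketch: it assumes the initial sudo -v succeeds interactively, and the same trick can be written as a background shell loop):

        import subprocess
        import threading

        stop = threading.Event()

        def keep_sudo_alive():
            # Refresh the cached credentials well inside the sudo timeout window.
            while not stop.wait(60):
                subprocess.run(["sudo", "-nv"])  # -n: never prompt from the background

        subprocess.run(["sudo", "-v"], check=True)  # ask for the password once, up front
        threading.Thread(target=keep_sudo_alive, daemon=True).start()

        if subprocess.run(["make", "-j4"]).returncode == 0:
            subprocess.run(["sudo", "make", "install"])
        stop.set()

    The password is entered immediately, make runs under normal permissions, and make install still gets elevated permissions no matter how long the build takes.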

    Read the article

  • What are best practices for testing programs with stochastic behavior?

    - by John Doucette
    Doing R&D work, I often find myself writing programs that have some large degree of randomness in their behavior. For example, when I work in Genetic Programming, I often write programs that generate and execute arbitrary random source code. A problem with testing such code is that bugs are often intermittent and can be very hard to reproduce. This goes beyond just setting a random seed to the same value and starting execution over. For instance, code might read a message from the kernel ring buffer and then make conditional jumps based on the message contents. Naturally, the ring buffer's state will have changed when one later attempts to reproduce the issue. Even though this behavior is a feature, it can trigger other code in unexpected ways, and thus often reveals bugs that unit tests (or human testers) don't find. Are there established best practices for testing systems of this sort? If so, some references would be very helpful. If not, any other suggestions are welcome!
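
    One practice that addresses exactly the ring-buffer example is to put every nondeterministic input behind a seam you control, record what the real source returned during a run, and replay that recording when chasing a failure. A minimal sketch of the idea in Python (the class and function names are illustrative):

        import json

        class RecordingSource:
            """Wraps a nondeterministic input source and logs everything it returns."""
            def __init__(self, real_read, log_path):
                self.real_read = real_read
                self.log = open(log_path, "w")

            def read(self):
                value = self.real_read()
                self.log.write(json.dumps(value) + "\n")  # persist for later replay
                return value

        class ReplaySource:
            """Feeds a previous run's recorded inputs back to the system under test."""
            def __init__(self, log_path):
                with open(log_path) as f:
                    self.values = [json.loads(line) for line in f]

            def read(self):
                return self.values.pop(0)

        # Normal run:   system_under_test(RecordingSource(read_ring_buffer, "run1.log"))
        # Reproduction: system_under_test(ReplaySource("run1.log"))

    The intermittent, environment-dependent behavior stays a feature in production, but a failing run becomes deterministic once you replay its log.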

    Read the article

  • Deprecate a web API: Best Practices?

    - by TheLQ
    Eventually you need to deprecate parts of your public web API. However, I'm confused about what would be the best way to do it. If you have a large third-party app base, just yanking old versions of the API seems like the wrong way to do it, as almost all apps would fail overnight. However, you can't keep ancient web APIs available forever, as they might be outdated or there may be significant changes that make working with them impossible. What are some best practices for deprecating old web APIs?
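
    A common middle ground is to keep old versions serving for a published sunset period while signaling the deprecation in-band, so client developers see it coming. A hypothetical sketch as a tiny WSGI app in Python (the routes are invented; the Sunset header comes from RFC 8594, and the Deprecation header is used by some APIs but is not universally standardized):

        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            path = environ["PATH_INFO"]
            headers = [("Content-Type", "application/json")]
            if path.startswith("/v1/"):
                # The old version still answers, but every response warns clients.
                headers.append(("Deprecation", "true"))
                headers.append(("Sunset", "Sat, 01 Jan 2028 00:00:00 GMT"))
                headers.append(("Link", '</v2/docs>; rel="successor-version"'))
            start_response("200 OK", headers)
            return [b'{"ok": true}']

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()

    Pair the headers with an announced timeline and usage monitoring, and only pull the old version once traffic to it has actually died down.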

    Read the article

  • Problem downloading .exe file from Amazon S3 with a signed URL in IE

    - by Joe Corkery
    I have a large collection of Windows .exe files which are being stored/distributed using Amazon S3. We use signed URLs to control access to the files, and this works great except in one case: when trying to download a .exe file using Internet Explorer (version 8). It works just fine in Firefox. It also works fine if you don't use a signed URL (but that is not an option). What happens is that the IE downloader changes the name from 'myfile.exe' to 'myfile[1]' and Windows no longer recognizes it as an executable. Any advice would be greatly appreciated. Thanks.
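
    One thing worth trying is to force the download headers through the signed URL itself, so IE gets an explicit filename and content type. S3 honors response-header overrides that are baked into the signature; with today's boto3 (which postdates the original question) that looks roughly like this sketch, where the bucket and key are placeholders:

        import boto3

        s3 = boto3.client("s3")

        # The Response* params override the headers S3 sends back, and they are
        # part of the signature, so the client cannot strip them off.
        url = s3.generate_presigned_url(
            "get_object",
            Params={
                "Bucket": "my-installers",
                "Key": "myfile.exe",
                "ResponseContentType": "application/octet-stream",
                "ResponseContentDisposition": 'attachment; filename="myfile.exe"',
            },
            ExpiresIn=3600,  # signed URL valid for one hour
        )
        print(url)

    Setting the same Content-Type and Content-Disposition as metadata on the object at upload time is the equivalent fix when you control the upload.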

    Read the article

  • Program to Help Order Undated Photos

    - by Richard
    I have a large number of photos which have the correct DateTimeOriginal set in EXIF. I have about 300 photos for which the DateTimeOriginal is completely wrong. The DateTimeOriginals of these photos are not correlated, so I cannot change their times en masse; it must be done individually. I'm looking for a program that would essentially allow me to drag and drop the incorrectly time-stamped photos into their place in the sequence of correctly time-stamped photos. It would be nice to then be able to have the DateTimeOriginal tag updated, or the photos renamed chronologically. Thanks!
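
    If no drag-and-drop tool turns up, the tag rewrite itself is easy to script once you have decided the order. A sketch in Python using the piexif library (an assumption on my part; install it with pip install piexif) that, given a manually ordered list, spaces the photos one minute apart from a chosen start time:

        import datetime
        import piexif

        # Hypothetical: the files listed in the order you placed them.
        ordered = ["img_301.jpg", "img_305.jpg", "img_299.jpg"]
        start = datetime.datetime(2010, 6, 1, 12, 0, 0)

        for i, path in enumerate(ordered):
            stamp = (start + datetime.timedelta(minutes=i)).strftime("%Y:%m:%d %H:%M:%S")
            exif_dict = piexif.load(path)
            exif_dict["Exif"][piexif.ExifIFD.DateTimeOriginal] = stamp.encode()
            piexif.insert(piexif.dump(exif_dict), path)  # rewrite EXIF in place

    With DateTimeOriginal corrected, any photo manager can then sort or rename the whole set chronologically.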

    Read the article

  • dual boot install--no GRUB

    - by Jim Syyap
    My computer recently had a hardware upgrade and now runs on Windows 7. I decided to install Ubuntu 11.04 as dual boot using the ISO I got from ubuntu.com downloaded onto my USB stick. Restarting with the USB stick, I was able to install Ubuntu 11.04, choosing the option "Install Ubuntu 11.04 side by side with Windows 7" (or something like that). No errors were encountered on installation. However, on restarting there was no GRUB; the system went straight into Windows 7. Looking for answers, I found these:
    http://essayboard.com/2011/07/12/how-to-dual-boot-ubuntu-11-04-and-windows-7-the-traditional-way-through-grub-2/
    http://ubuntuforums.org/showthread.php?t=1774523
    Following their instructions, I got:

        Boot Info Script 0.60 from 17 May 2011
        ============================= Boot Info Summary: ===============================
        => Windows is installed in the MBR of /dev/sda.
        => Syslinux MBR (3.61-4.03) is installed in the MBR of /dev/sdb.
        => Grub2 (v1.99) is installed in the MBR of /dev/sdc and looks at sector 1 of
           the same hard drive for core.img. core.img is at this location and looks
           for (,msdos7)/boot/grub on this drive.
        sda1: ________________________________________________________________________
        File system: ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System:
        Boot files: /grldr /bootmgr /Boot/BCD /grldr
        sda2: ________________________________________________________________________
        File system: ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System: Windows 7
        Boot files: /Windows/System32/winload.exe
        sdb1: ________________________________________________________________________
        File system: vfat
        Boot sector type: SYSLINUX 4.02 debian-20101016
        Boot sector info: Syslinux looks at sector 1437504 of /dev/sdb1 for its
           second stage. SYSLINUX is installed in the directory. The integrity check
           of the ADV area failed. According to the info in the boot sector, sdb1
           starts at sector 0. But according to the info from fdisk, sdb1 starts at
           sector 62.
        Operating System:
        Boot files: /boot/grub/grub.cfg /syslinux/syslinux.cfg /ldlinux.sys
        sdc1: ________________________________________________________________________
        File system: ntfs
        Boot sector type: Windows XP
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System:
        Boot files:
        sdc2: ________________________________________________________________________
        File system: Extended Partition
        Boot sector type: -
        Boot sector info:
        sdc5: ________________________________________________________________________
        File system: swap
        Boot sector type: -
        Boot sector info:
        sdc6: ________________________________________________________________________
        File system: swap
        Boot sector type: -
        Boot sector info:
        sdc7: ________________________________________________________________________
        File system: ext4
        Boot sector type: -
        Boot sector info:
        Operating System: Ubuntu 11.04
        Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img
        sdc8: ________________________________________________________________________
        File system: swap
        Boot sector type: -
        Boot sector info:

    Going back into Ubuntu and running sudo fdisk -l, I got this:

        ubuntu@ubuntu:~$ sudo fdisk -l
        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002f393
        Device Boot      Start       End      Blocks  Id  System
        /dev/sda1   *        1        13      102400   7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2           13     19458   156185600   7  HPFS/NTFS
        Disk /dev/sdb: 2011 MB, 2011168768 bytes
        62 heads, 62 sectors/track, 1021 cylinders
        Units = cylinders of 3844 * 512 = 1968128 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f2ab9
        Device Boot      Start       End      Blocks  Id  System
        /dev/sdb1   *        1      1021     1962331   c  W95 FAT32 (LBA)
        Disk /dev/sdc: 1000.2 GB, 1000202043392 bytes
        255 heads, 63 sectors/track, 121600 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00261ddd
        Device Boot      Start       End      Blocks  Id  System
        /dev/sdc1   *        1     60657   487222656+   7  HPFS/NTFS
        /dev/sdc2        60657    121600   489527681    5  Extended
        /dev/sdc5       120563    121600     8337703+  82  Linux swap / Solaris
        /dev/sdc6       120073    120562     3930112   82  Linux swap / Solaris
        /dev/sdc7        60657    119584   473328640   83  Linux
        /dev/sdc8       119584    120072     3923968   82  Linux swap / Solaris

    Should I proceed and do the following?
    Assuming Ubuntu 11.04 was installed on device sdb1, do this: sudo mount /dev/sdb1 /mnt
    Then do this: sudo grub-install --root-directory=/mnt /dev/sdb (notice there are two dashes in front of root-directory, and I'm not using sdb1 but sdb).
    Since the command in step 15 reinstalled GRUB 2, we now need to unmount /mnt (i.e. sdb1) to clean up. Do this: sudo umount /mnt
    Reboot and remove the Ubuntu 11.04 CD/DVD from the disk tray.
    Log into Ubuntu 11.04 (you have no choice; it will make you log into Ubuntu 11.04 at this point).
    Open up a terminal in Ubuntu 11.04 (using the real installation, not the live CD/DVD).
    Execute this command: sudo update-grub
    Reboot the machine.

    Read the article

  • Bzr to git migration

    - by Sardathrion
    I am planning to do two things on several large (several gigs) and old (several years) repositories:
    • Move from bzr to git without losing the commit history.
    • Restructure all the repositories, either using bzr or git. This will involve moving files/directories from one repository to another with their change history.
    Doing both at once would be foolish (I think!), but I am not sure which one should be done first. Any suggestions? Anything I should watch out for when migrating/restructuring?
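
    On the first point, the history-preserving route I know of is bzr's fast-export stream piped into git fast-import. A sketch in Python (it assumes the bzr fastimport plugin is installed, and the repository paths are placeholders):

        import subprocess

        subprocess.run(["git", "init", "converted.git"], check=True)

        # Stream the full bzr history into git; the marks files allow re-runs.
        exporter = subprocess.Popen(
            ["bzr", "fast-export", "--export-marks=marks.bzr", "my-bzr-branch"],
            stdout=subprocess.PIPE,
        )
        subprocess.run(
            ["git", "fast-import", "--export-marks=../marks.git"],
            stdin=exporter.stdout,
            cwd="converted.git",
            check=True,
        )
        exporter.wait()

    Since both tools can move content with history, migrating first and restructuring in git afterwards keeps the riskier surgery in the system you will live with long-term.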

    Read the article

  • Android: Layouts and views or a single full screen custom view?

    - by futlib
    I'm developing an Android game, and I'm making it so that it can run on low-end devices without a GPU, so I'm using the 2D API. So far I have tried to use Android's mechanisms such as layouts and activities where possible, but I'm beginning to wonder if it isn't easier to just create a single custom view (or one per activity) and do all the work there. Here's an example of how I currently do things: I'm using a layout to display the game's background as an image view, with the square game area, which is a custom view, centered in the middle. What would you say? Should I continue to use layouts where possible, or is it more common/reasonable to just use a large custom view? I'm thinking that this would probably also make it easier to port my code to other platforms.

    Read the article
