Search Results

Search found 59278 results on 2372 pages for 'time estimation'.


  • I need two GIMPs

    - by truth seeking fire walker
    I am using a vintage computer at home and, to be frank, I am satisfied with it, since it is what I can afford right now. I run an Ubuntu-derived Linux OS and have GIMP 2.8 working well; it meets my needs. But with the vintage configuration I have, it takes a long time to load and to work with. I need it for my little jobs related to educational materials, and most of the time I don't need my tweaked-up GIMP with all the plug-ins and extra brushes -- I need a faster GIMP. So I would like to have GIMP 2.6, or even 2.4, on my system alongside the current one. I have Pinta and similar software, but to meet my needs I definitely need GIMP. My question is: can I have two different versions of GIMP installed at the same time, both of which I can launch from the menu? Please help, and thank you all for letting me breathe the fresh air of open source.

    Read the article

  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No.3.15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used software which broke up incoming “parent” orders into smaller “child” orders that were then transmitted to various exchanges or trading venues for execution. A tracking ‘cumulative quantity’ function counted the number of ‘child’ orders and stopped the process once the total of child orders matched the ‘parent’, and so the parent order had been completed. Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called ‘power peg’ and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point, had that flag ever been set, the old ‘Power Peg’ would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn’t, presumably because the software that set the flag was removed. In 2012, nearly a decade after ‘Power Peg’ was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused, and someone made the fateful decision to reuse it and replace the old ‘power peg’ code with this new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn’t. To quote… “Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.” (para 15) “On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers.
Although one part of Knight’s order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS.” (para 16) SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight’s shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I’m interested in all this because, at one time, I used to write trading systems for the City of London. Obviously, the US SEC is in a far better position than any of us to work out the failings of Knight’s IT department, and the report makes for painful reading. I can’t help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was ‘all-or-nothing’, and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek Tragedy. All the way along one wants to shout ‘No! Not that way!’ and ‘Aargh! Don’t do it!’. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight’s mistakes weren’t made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they’re obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • How to ramp up my data structures skills after a long hibernation

    - by Anon
    I was pretty good with algorithms and data structures once, a long, long time ago. Since then, I programmed professionally, and then went to manage a small team, which set my tech skills in this field way back. I've decided I want to be a developer again, and work for Google. The thing is, I'm so out of practice that if I were interviewed right now I would surely flunk out in 10 minutes. What training program would you recommend for me to get back into shape? I already started this weekend by going back to the absolute basics and implementing a few sorting algorithms, a linked list, and a hash table. Next, I think I'll read through the entire course material on the other basic data structures and graph algorithms. I want to find a focused set of practical exercises I can do in a relatively short amount of time, to jog the old brain cells. I know this stuff - I just need to remind myself that I know it.
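
    By way of illustration, a minimal sketch (in Java, with an illustrative class name) of the kind of warm-up exercise described above: a singly linked list with append and in-place reverse.

    // Minimal singly linked list warm-up: append and in-place reverse.
    public class SinglyLinkedList {
        private static class Node {
            int value;
            Node next;
            Node(int value) { this.value = value; }
        }

        private Node head;

        // Append a value at the tail (O(n)).
        public void append(int value) {
            Node node = new Node(value);
            if (head == null) { head = node; return; }
            Node cur = head;
            while (cur.next != null) cur = cur.next;
            cur.next = node;
        }

        // Reverse the list in place (O(n) time, O(1) extra space).
        public void reverse() {
            Node prev = null, cur = head;
            while (cur != null) {
                Node next = cur.next;
                cur.next = prev;
                prev = cur;
                cur = next;
            }
            head = prev;
        }

        public static void main(String[] args) {
            SinglyLinkedList list = new SinglyLinkedList();
            for (int v : new int[] {1, 2, 3, 4}) list.append(v);
            list.reverse();
            // Walk the list and print: 4 3 2 1
            for (Node cur = list.head; cur != null; cur = cur.next) {
                System.out.print(cur.value + " ");
            }
        }
    }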

    Read the article

  • Right mix of planning and programming on a new project

    - by WarrenFaith
    I am about to start a new project (a game, but that's unimportant). The basic idea is in my head, but not all the details. I don't want to start programming without planning, but I am seriously fighting my urge to just do it. I want some planning up front to avoid refactoring the whole app just because a new feature I think of requires it. On the other hand, I don't want to spend multiple months (of spare time) planning before I start, because I fear I will lose my motivation in that time. What I am looking for is a way of combining both, without one dominating the other. Should I run the project the Scrum way? Should I create user stories and then realize them? Should I work feature-driven? (I have some experience with Scrum and the classic "specification to code" way.)

    Read the article

  • How to become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!
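
    As an aside, for anyone equally hazy on the terms mentioned above: a unit test is just a small, automated check that one piece of code behaves as expected, run by the build so regressions are caught early. A minimal sketch using JUnit 5 (the class under test is hypothetical):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // A tiny class under test (hypothetical example).
    class PriceCalculator {
        // Returns the price after applying a percentage discount.
        double discounted(double price, double percent) {
            return price * (1.0 - percent / 100.0);
        }
    }

    // The unit test: one small, automated check per behaviour.
    class PriceCalculatorTest {
        @Test
        void tenPercentOffOneHundredIsNinety() {
            assertEquals(90.0, new PriceCalculator().discounted(100.0, 10.0), 1e-9);
        }
    }

    A code review is the human counterpart: a teammate reads the change and comments on it before it is merged.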

    Read the article

  • Windows 7: Search indexing is stuck

    - by Ricket
    When I open Indexing Options, it says: "4,317 items indexed. Indexing in progress. Search results might not be complete during this time." It's stuck at 4,317 though; no more items have been indexed. Worst of all, SearchIndexer.exe is taking up 100% CPU (well, 50%, but I have a dual-core CPU; it's taking up all the processing power it can). It is not causing hard drive activity, though. I tried clicking "Troubleshoot search and indexing" at the bottom of the Indexing Options window, but it couldn't find any problem. I've also tried the registry repair that several websites suggest: I changed the SetupCompletedSuccessfully value under HKLM\SOFTWARE\Microsoft\Windows Search to 0 and restarted the computer, and it apparently repaired because the value flipped back to 1, but the same problem continues to occur. It's reducing the battery life of my laptop and making it really hot, so that my fans are running all the time. I've had to disable the Windows Search service. How can I fix this? Do I need to just flat-out reformat my computer? Update: I've tried rebuilding a couple of times. There's nothing unusual about the locations I have to index, and I don't have any downloads in progress or anything like that. I don't see any reason why it stopped, and I noticed it much too late to do a system restore. At this point, I'm hoping someone will offer up some secret answer that will fix the problem, thus the bounty. Another update: I tried starting the service again, just to let it try yet again. It seemed okay at first (Indexing Options showed it operating at reduced speed due to user activity, and the number of files was going up). A while later I checked, and the service had stopped. Event Viewer revealed some errors like this:
    Log Name: Application
    Source: Application Error
    Date: 2/1/2010 7:34:23 PM
    Event ID: 1000
    Task Category: (100)
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: ricky-win7
    Description:
    Faulting application name: SearchIndexer.exe, version: 7.0.7600.16385, time stamp: 0x4a5bcdd0
    Faulting module name: NLSData0007.dll, version: 6.1.7600.16385, time stamp: 0x4a5bda88
    Exception code: 0xc0000005
    Fault offset: 0x002141ba
    Faulting process id: 0x13a0
    Faulting application start time: 0x01caa39f2a70ec02
    Faulting application path: C:\Windows\system32\SearchIndexer.exe
    Faulting module path: C:\Windows\System32\NLSData0007.dll
    Report Id: b4f7a7ae-0f92-11df-87fc-e5d65d8794c2
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Application Error" />
        <EventID Qualifiers="0">1000</EventID>
        <Level>2</Level>
        <Task>100</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2010-02-02T00:34:23.000000000Z" />
        <EventRecordID>10689</EventRecordID>
        <Channel>Application</Channel>
        <Computer>ricky-win7</Computer>
        <Security />
      </System>
      <EventData>
        <Data>SearchIndexer.exe</Data>
        <Data>7.0.7600.16385</Data>
        <Data>4a5bcdd0</Data>
        <Data>NLSData0007.dll</Data>
        <Data>6.1.7600.16385</Data>
        <Data>4a5bda88</Data>
        <Data>c0000005</Data>
        <Data>002141ba</Data>
        <Data>13a0</Data>
        <Data>01caa39f2a70ec02</Data>
        <Data>C:\Windows\system32\SearchIndexer.exe</Data>
        <Data>C:\Windows\System32\NLSData0007.dll</Data>
        <Data>b4f7a7ae-0f92-11df-87fc-e5d65d8794c2</Data>
      </EventData>
    </Event>
    If you are having the same error and arrived here from a Google search, please comment or add an answer detailing your progress on this, if any...

    Read the article

  • Skype does not save user configuration

    - by varsketiz
    On Ubuntu 10.10, I have recently started to experience this problem: For some reason Skype won't save any settings except "Sign in on startup". When Skype starts and tries to sign in it is unsuccessful (it shows incorrect password, in red) every time (I have provided the correct password). I always have to click to show contacts in groups, it does not remember it. Every time I have to go to options and update some notification settings. I know all these things should be "remembered" by Skype - this problem started only recently. I don't recall fiddling with any permissions that could have likely caused that. Do you know what the problem might be? I uninstalled (marking for complete configuration removal) and re-installed Skype, but it still remembers my username (why??). Can I find Skype configuration files on the filesystem somewhere and change permissions for them - or even better - edit the files to set what I want?

    Read the article

  • Oracle Communications Calendar Server: Upgrading to Version 7 Update 3

    - by joesciallo
    It's been some time since I have posted an entry. Now, with the release of Oracle Communications Calendar Server 7 Update 3, it seems high time to jump-start this blog again. To begin with, check out what's new in this release:
      - Authenticating Against an External Directory
      - Booking Window for Calendars
      - Changes to the davadmin Command
      - Enable and Disable Account Autocreation
      - LDAP Pools
      - New Configuration Parameters
      - New Languages
      - New populate-davuniqueid Utility
      - New Schema Objects
      - Non-active Calendar Accounts Are No Longer Searched or Fetched
      - Remote Document Store Authentication
    The upgrade is a bit more complicated than normal, as you must first apply some new schema elements to your Directory Server(s). To do so, you need to get the comm_dssetup 6.4 patch, patch the comm_dssetup script, and then run the patched comm_dssetup against your Directory Server instance(s). In addition, if you are using the nsUniqueId attribute as your deployment's unique identifier, you'll want to change that to the new davUniqueId attribute. Consult the Upgrade Procedure for details, as well as DaBrain's blog, before forging ahead with this upgrade. Additional quick links:
      - Problems Fixed in This Release
      - Known Issues
      - Calendar Server Unique Identifier
      - Changes to the davadmin command
      - Get the Calendar Server patch
      - Get the comm_dssetup patch

    Read the article

  • EaseFunction in LoopEntityModifier

    - by Siddharth
    For my game, I need an EaseFunction in a LoopEntityModifier. I am rotating a ball around a certain object that is already rotating, and to give it a nice effect I want to use an EaseFunction. The ball should make around 4 to 5 rounds around the object, and I want to add some effect so that it looks good, so I have to use an EaseFunction that suits my needs. But if I put the EaseFunction in the rotation modifier, then the effect of the EaseFunction is applied on each round, whereas I want it to occur only once, either at the start or at the end. So if I could provide an EaseFunction to the LoopEntityModifier itself, that would be good for me, or something similar would also work. At present my code looks something like this: new LoopEntityModifier(new RotationModifier(...)); I hope someone has some idea on this.
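
    One option worth sketching (under the assumption that this is the AndEngine GLES2 branch, where RotationModifier has a constructor taking an IEaseFunction as its last argument): drop the LoopEntityModifier and express all the revolutions as a single RotationModifier, so the ease function is applied exactly once over the whole sweep instead of once per loop. The class, method, and values below are illustrative, not from the original code.

    import org.andengine.entity.modifier.RotationModifier;
    import org.andengine.entity.sprite.Sprite;
    import org.andengine.util.modifier.ease.EaseQuadInOut;

    public final class BallRotation {
        private BallRotation() {}

        // Spin the ball a given number of full revolutions with one modifier,
        // easing in at the very start and out at the very end only.
        public static void spin(Sprite ball, float durationSeconds, int revolutions) {
            ball.registerEntityModifier(new RotationModifier(
                    durationSeconds, 0f, 360f * revolutions, EaseQuadInOut.getInstance()));
        }
    }

    Any of the ease classes could be swapped in; the point is simply that one modifier spanning 4-5 x 360 degrees gives a one-time ease, which is what the question asks for.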

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved. Background: Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:
      - shares, for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles);
      - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and
      - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU).
    (This isn't meant to be an exhaustive list of Solaris RM controls.) Workload management: Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.) Prior art: There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck? Actual workload management: That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly.
If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective. Instrumenting non-instrumented applications: There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust. Summary: After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
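
    By way of illustration, a minimal sketch in Java of the feedback loop described above - hold the service level objective constant and adjust the resource allocation instead. Every interface and name here is hypothetical; it is not a Solaris API, just the shape of the idea.

    // Hypothetical sketch: measure an externally visible metric (response time),
    // compare it to the service level objective, and adjust a resource (CPU shares)
    // until the objective is met.
    public class WorkloadManagerSketch {

        interface Workload {
            double averageResponseTimeMillis(); // externally visible metric
            int cpuShares();
            void setCpuShares(int shares);
        }

        private final double objectiveMillis; // e.g. "average response under 200 ms"
        private final int stepShares;         // how aggressively to adjust

        public WorkloadManagerSketch(double objectiveMillis, int stepShares) {
            this.objectiveMillis = objectiveMillis;
            this.stepShares = stepShares;
        }

        // Called once per sampling interval.
        public void adjust(Workload w) {
            double observed = w.averageResponseTimeMillis();
            if (observed > objectiveMillis) {
                // Missing the objective: grant more CPU shares.
                w.setCpuShares(w.cpuShares() + stepShares);
            } else if (observed < 0.5 * objectiveMillis && w.cpuShares() > stepShares) {
                // Comfortably beating it: give shares back so other workloads aren't starved.
                w.setCpuShares(w.cpuShares() - stepShares);
            }
            // A real manager would first identify *which* resource (CPU, RAM, I/O)
            // is the bottleneck - the point the post makes about DTrace and prstat.
        }
    }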

    Read the article

  • What steps to take in resolving/fixing/optimizing a long boot, with possible looping errors as the culprit

    - by Tchalvak
    So my boot time has been slowing and slowing as time has gone on... I am running a number of services (e.g. apache/mysql, postgresql), but it has seen a drastic slowing lately, while I've only been applying updates as normal. I happened to check out my /var/log/boot.log and it is spammed with many lines of this: init: upstart-udev-bridge main process (2738) terminated with status 1 init: upstart-udev-bridge main process ended, respawning I wasn't able to find any solutions to that issue in google, or much talk of it at all, and I'm not really certain that error is the problem, but it is the only lead that I have. What steps should I go through to diagnose boot problems/a slow bootup?

    Read the article

  • Skills to Focus on to Land a Big 5 Software Engineer Position

    - by Megadeth.Metallica
    Guys, I'm in my penultimate quarter of grad school and have a software engineering internship lined up at a Big 5 tech company. I have dabbled a lot recently in Python and am average at Java. I want to prepare myself for coding interviews when I apply for new-grad positions at the Big 5 tech companies when I graduate at the end of this year. Since I want to have a good shot at all five companies (Amazon, Google, Yahoo, Microsoft, and Apple), should I focus my time and effort on mastering and improving my Java? Or is my time better spent checking out other languages and tools (I'm attracted to RoR, Clojure, Git, and C#)? I am planning to spend my spring break implementing all the common algorithms and data structures out of my algorithms textbook in Java.
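
    As an example of the sort of textbook staple worth re-implementing from scratch over a break (sketched here in Java; the class name is illustrative):

    // Classic iterative binary search over a sorted int array.
    // Returns the index of target, or -1 if it is not present.
    public final class BinarySearch {

        public static int indexOf(int[] sorted, int target) {
            int lo = 0, hi = sorted.length - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2; // written this way to avoid (lo + hi) overflow
                if (sorted[mid] == target) return mid;
                if (sorted[mid] < target) lo = mid + 1;
                else hi = mid - 1;
            }
            return -1;
        }

        public static void main(String[] args) {
            int[] data = {2, 3, 5, 8, 13, 21, 34};
            System.out.println(indexOf(data, 13)); // prints 4
            System.out.println(indexOf(data, 4));  // prints -1
        }
    }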

    Read the article

  • High I/O wait after login

    - by Jackson Tan
    I've noticed that the ubuntuone-syncdaemon hogs the hard disk every time I log in to Ubuntu (10.04). This takes up to two or three minutes, which makes Ubuntu insufferably slow. Opening Firefox is okay, but the browser is constantly greyed out and lags horribly. Given that I often shut down my laptop when I don't use it (about 3 to 4 times a day), this makes Ubuntu lose much of its lustre because of its long boot time. Is this normal behaviour for Ubuntu One? Is it intended? Note that I've actually posted this in the forums here, but I received only a few replies.

    Read the article

  • How can I use WebGL to create a tile-based multi-layer scrolling platform game?

    - by Nicholas Hill
    I've found WebGL (based on OpenGL) to be a fiendish and unforgiving framework for those learning to write HTML5-based games. Despite the presence of many examples of how to get started, I'm really struggling to understand how I could simply load a bunch of images and render them to a canvas quickly using WebGL. My specific scenario involves trying to render a map using a bespoke but simple multi-layered tile engine, where each value in a three-dimensional array points to the image to use for that location in the rendered image. Think "Sonic the Hedgehog" via tilesets, tiles, maps, layers, sprites, etc. Can anyone enlighten me: 1) How can I load an image that I can use as a texture in WebGL? 2) How can I dynamically select an image at run time and draw it at any co-ordinate that I also select at run time?
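
    The WebGL texture and draw calls themselves are JavaScript, but the layered lookup the question describes is plain data-structure work, so here is a hedged sketch of that part only (in Java, with illustrative names): map[layer][row][col] indexes into a tileset, and the renderer walks the layers back to front.

    // Sketch of a multi-layer tile map walk; drawTile() stands in for the
    // engine's actual texture-draw call and is not a real WebGL API.
    public class TileMapSketch {
        static final int TILE_SIZE = 32; // pixels, illustrative
        static final int EMPTY = -1;     // "no tile here" marker

        // map[layer][row][col]: layer 0 is the background, higher layers draw on top.
        int[][][] map;

        interface TileRenderer {
            void drawTile(int tileIndex, int screenX, int screenY);
        }

        void render(TileRenderer renderer, int scrollX, int scrollY) {
            for (int[][] grid : map) {               // layers, back to front
                for (int row = 0; row < grid.length; row++) {
                    for (int col = 0; col < grid[row].length; col++) {
                        int tile = grid[row][col];
                        if (tile == EMPTY) continue; // skip empty cells
                        renderer.drawTile(tile,
                                col * TILE_SIZE - scrollX,
                                row * TILE_SIZE - scrollY);
                    }
                }
            }
        }
    }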

    Read the article

  • Audio playback: part of song is skipped

    - by Homulvas
    I am experiencing some problems with music playback after upgrading to Ubuntu 12.10. Basically, some of the songs stop playing after some time, as if the song had ended. It's always the same songs and the same time. The weird thing is that it happens with Clementine and Totem, but VLC doesn't have this problem, and it also plays as it should on Windows. I'm guessing there might be a problem with some library that's shared by the first two applications. I don't know if it's relevant, but the format of the audio files is FLAC (I don't know if the problem affects MP3, because I don't have many of them).

    Read the article

  • Problem animating in Unity/Orthello 2D. Can't move gameObject

    - by Nelson Gregório
    I have an enemy NPC that moves left and right in a corridor. It's animated with 2 sprites using the Orthello 2D Framework. If I untick the animation's "play on start" and "looping" options, the NPC moves correctly. If I turn them on, the NPC tries to move but is pulled back to its starting position again and again because of the animation loop. If I turn looping off during runtime, the NPC moves correctly again. What did I do wrong? Here's the NPC code if needed:

    using UnityEngine;
    using System.Collections;

    public class Enemies : MonoBehaviour
    {
        private Vector2 movement;
        public float moveSpeed = 200;
        public bool started = true;
        public bool blockedRight = false;
        public bool blockedLeft = false;
        public GameObject BorderL;
        public GameObject BorderR;

        void Update()
        {
            // Flip the direction flags when the left or right border is passed.
            if (gameObject.transform.position.x < BorderL.transform.position.x)
            {
                started = false;
                blockedRight = false;
                blockedLeft = true;
            }
            if (gameObject.transform.position.x > BorderR.transform.position.x)
            {
                started = false;
                blockedLeft = false;
                blockedRight = true;
            }

            // Initial walk to the right until a border is reached.
            if (started)
            {
                movement = new Vector2(1, 0f);
                movement *= Time.deltaTime * moveSpeed;
                gameObject.transform.Translate(movement.x, movement.y, 0f);
            }
            // Walk right again after bouncing off the left border.
            if (!blockedRight && !started && blockedLeft)
            {
                movement = new Vector2(1, 0f);
                movement *= Time.deltaTime * moveSpeed;
                gameObject.transform.Translate(movement.x, movement.y, 0f);
            }
            // Walk left after bouncing off the right border.
            if (!blockedLeft && !started && blockedRight)
            {
                movement = new Vector2(-1, 0f);
                movement *= Time.deltaTime * moveSpeed;
                gameObject.transform.Translate(movement.x, movement.y, 0f);
            }
        }
    }

    Read the article

  • Slerping rotation mirrors

    - by Esa
    I rotate my game character to look at the target using the following code:

    transform.rotation = Quaternion.Slerp(startQuaternion, lookQuaternion, turningNormalizer*turningSpeed/10f);

    startQuaternion is the character's current rotation when a new target is given. lookQuaternion is the direction the character should look at, and it's set like this:

    destinationVector = currentWaypoint.transform.position - transform.position;
    lookQuaternion = Quaternion.LookRotation(destinationVector, Vector3.up);

    turningNormalizer is just Time.deltaTime accumulated, and turningSpeed is a static value given in the editor. The problem is that while the character turns as it should most of the time, it has problems when it has to turn close to 180 degrees. Then it at times jitters and mirrors the rotation: in this poorly drawn image the character (on the right) starts to turn towards the circle on the left. Instead of just turning either left or right, it starts this "mirror dance": it starts to rotate towards the new facing, then it suddenly snaps to the same angle but on the other side and keeps rotating, and it keeps doing this "mirroring" until it looks at the target. Is this a thing with quaternions, slerping/lerping, or something else?

    Read the article

  • Has Microsoft stopped offering the free Internet Explorer Application Compatibility VPC Image for IE 6 testing?

    - by Paul D. Waite
    For some time now, Microsoft has made available free, stripped-down, time-limited Virtual PC images for testing web apps in older versions of IE. The most recent version is here: http://www.microsoft.com/download/en/details.aspx?id=11575 But the XP VPC image has now expired (14th Aug 2011), meaning one can no longer test IE 6 using this method. Have Microsoft made updated XP VPC images available? If not, have they commented on the situation? Do they provide any alternative method to test web apps in IE 6? Update: As noted by @PleaseStand, as of 16th Aug 2011, Microsoft has made updated images available that expire on 17th November 2011.

    Read the article

  • Vendors: Partners or Salespeople?

    - by BuckWoody
    I got a great e-mail from a friend that asked about how he could foster a better relationship with his vendors. So many times when you work with a vendor it’s more of a used-car sales experience than a partnership – but you can actually make your vendor more of a partner, as long as you both set some ground-rules at the start. Sit down with your vendor, and have a heart-to-heart talk with them, explain that they won’t win every time, but that you’re willing to work with them in an honest way on both sides. Here’s the advice I sent him verbatim. I hope this post generates lots of comments from both customers and vendors. I don’t expect that you’ve had a great experience with your Microsoft reps, but I happen to work with some of the best sales teams in the business, and our clients tell us that all the time. “The key to this relationship is to keep the audience really small. Ideally there should be one person from your side that is responsible for the relationship, and one from the vendor’s side. Each responsible person should have the authority to make decisions, and to bring in other folks as needed for a given topic, project or decision.   For Microsoft, this is called an “Account Manager” – they aren’t technical, they aren’t sales. They “own” a relationship with a company. They learn what the company does, who does it, and how. They are responsible to understand what the challenges in your company are. While they don’t know the bits and bytes of everything we sell, they know what each thing does, and who to talk to about it. I get a call from an Account Manager every week that has pre-digested an issue at an organization and says to me: “I need you to set up an architectural meeting with their technical staff to get a better read on how we can help with problem X.” I do that and then report back to the Account Manager what we learned.  All through this process there’s the atmosphere of a “team”, not a “sales opportunity” per se. I’ve even recommended that the firm use a rival product, and I’ve never gotten push-back on that decision from my Account Managers.   But that brings up an interesting point. Someone pays an Account Manager and pays me. They expect something in return. At some point, you have to buy something. Not every time, not every situation – sometimes it’s just helping you with what you already bought from us. But the point is that you can’t expect lots of love and never spend any money. That’s the way business works.   Finally, don’t view the vendor as someone with their hand in your pocket – somebody that’s just trying to sell you something and doesn’t care if they ever see you again – unless they deserve it. There are plenty of “love them and leave them” companies out there, and you may have even had this experience with us, but that isn’t the case in the firms I work with. In fact, my customers get a questionnaire that asks them that exact question. “How many times have you seen your account team? Did you like your interaction with them? Can they do better?” My raises, performance reviews and general standing in my group are based on the answers the company gives.  Ask your vendor if they measure their sales and support teams this way – if not, seek another vendor to partner with.   Partnering with someone is a big deal. It involves time and effort on your part, and on the vendor’s part. If either of you isn’t pulling your weight, it just won’t work. 
You have every right to expect them to treat you as a partner, and they have the same right for your side.”

    Read the article

  • Kicked out when logged in?

    - by acidzombie24
    When I log into Ubuntu, the screen goes black as if it were logging in, but then I am back at the login screen. I can log in as guest (I should disable that). I can log in via SSH. The last time I had no issues was when I used it this morning, and the only unusual thing I did was shut down via SSH (I think I wrote shutdown now), which made Ubuntu hang after showing the shutting-down screen for many seconds (I kind of assumed it shut down properly, since the UI froze as well and it usually takes less time). What's the easiest way to fix this without reinstalling? It's a brand new install, so it probably takes me 10 minutes to re-copy and set everything up again, so I may just do that. Edit: I'm using 12.04.

    Read the article

  • Distance Education in Computer Science - HCI

    - by Rionmonster
    I have been a software engineer / graphic designer for a few years and have recently been considering furthering my education in the field. (It was actually a very generous Christmas present) I would primarily be interested in something like Human Computer Interaction or a similar "creative technology" that involves heavy UI/UX Design, prototyping or Information Architecture. Anyways - I still plan on working full-time and was looking into part-time distance programs and was wondering if anyone had some experience with pursuing a similar degree (either from a distance or in-person) and could share their experiences. Thanks!

    Read the article

  • GDD-BR 2010 [1C] Google Web Toolkit: What it is, How it Works and Deeper Dives

    GDD-BR 2010 [1C] Google Web Toolkit: What it is, How it Works and Deeper Dives. Speaker: Chris Ramsdale | Track: Cloud Computing | Time slot: C [12:05 - 12:50] | Room: 1 | Level: 151. If you're like the rest of us, at some point in your web app development you've wondered if there was an easier solution - one that includes built-in debuggers, code refactoring, reliable syntax highlighting, etc. After all, why should the server-side and desktop programmers get all of the good tools? The good news is that with Google Web Toolkit (GWT) you do have access to these tools. In this session, Chris Ramsdale will get you up and running with GWT, including what it is, how it works, and deeper dives into generators, native JavaScript interop, and compiler optimizations. From: GoogleDevelopers | Views: 1 | 0 ratings | Time: 35:02 | More in Science & Technology

    Read the article

  • Techniques and methods to improve and speed up development

    - by Jlouro
    I just got a new project. It's a WinForm app, and it's interesting because I will have the chance to learn how fish farms work and are managed. I will have to develop touch interfaces and use DataSnap to communicate and sync data between the farm and the office. The development will be done in Delphi (Win32); I'm still not sure about the database, but maybe SQLite. So now comes the difficult part of the project: building it. I am just finishing another project; it took me a lot of time to do and I am a bit "burned out". I will have time to relax - not much, but a few days. What I am looking for is some tips and tricks, and possibly tools, to be able to speed up the development. It must be up and running in 2 months. So what tricks do you use to speed things up?

    Read the article
