Search Results

Search found 13889 results on 556 pages for 'scratch memory'.


  • The Alienware M11xR3 has arrived

    - by Enrique Lima
    A week or so ago, I mentioned my gear was evolving. The newest member of my gear arrived yesterday, an Alienware M11xR3. Here are the specs:

      - Intel Core i7-2617M 1.5GHz (2.6GHz Turbo Mode, 4MB Cache)
      - NVIDIA GeForce GT540 graphics with 2.0GB Video Memory and Optimus
      - 16GB Dual Channel DDR3 at 1333MHz
      - 11.6in High Def (720p/1366x768) with WLED backlight
      - 750GB 7200RPM SATA 3Gb/s
      - Soundblaster X-Fi Hi Def Audio - Software Enabled
      - Intel Advanced-N WiFi Link 6250 a/g/n 2x2 MIMO Technology with WiMax
      - Gobi Mobile Broadband with GPS - supports ATT with contract
      - Internal Bluetooth 3.0

    Some pics from the unboxing event:

    Read the article

  • Recording for the JVM Diagnostics & Configuration Management sessions

    - by ablyth
    Hi all, the middleware team has posted recordings of the first two sessions from the iDemo series they are running (see the MiddlewareTechTalk blog). Check them out below! You can download the recordings from the following links.

      - Troubleshoot Java Memory Leaks with Oracle JVM Diagnostics (9 June 2011, 2:04 pm Sydney time, 53 minutes)
      - Manage WebLogic Servers by Oracle Enterprise Manager & Configuration Manager (16 June 2011, 1:59 pm Sydney time, 49 minutes)

    Cheers, Alex

    Read the article

  • My error with upgrading 4.0 to 4.2- What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, uploaded and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were not any resources to fail back and forth between the controllers. Each controller had its own private management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first.

    The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2. Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we just upgraded the second controller to 4.2, so they should have been the same, right???

    Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet.

    So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. Once again, it seemed to go great, rebooted, and came back up to the same issue, where it came up on 4.0 instead of 4.2. See the picture below....

    HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishworks engineer to look at the files and the code and determine what file was messed up and fixed it. The system was up and working just fine; it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would roll back to the 4.2 code.

    Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... which was stupid on my part, because something was wrong with the 4.2 code file here and the 4.0 was just fine. So now the system could not boot at all, the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time. So....
    If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update; it worked perfectly the first time and is now great and stable. Lesson learned.

    By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me for the above issue, but it's important to follow the correct procedure when doing an upgrade:

      1) Upload software to both controllers and wait for it to unpack.
      2) On controller "A", navigate to configuration/cluster and click "takeover".
      3) Wait for controller "B" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
      4) Wait for controller "B" to apply the update and finish rebooting.
      5) Log in to controller "B", navigate to configuration/cluster and click "takeover".
      6) Wait for controller "A" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
      7) Wait for controller "A" to apply the update and finish rebooting.
      8) Log in to controller "B", navigate to configuration/cluster and click "failback".

    Read the article

  • Procedurally generated 2d terrain for side scroller on Sega Genesis hardware?

    - by DJCouchyCouch
    I'm working on the Sega Genesis, which has an 8 MHz Motorola 68000 CPU. Any ideas on how to generate decent 2D tile terrain for a side scroller, fast enough to run in real time? The game would generate new columns or rows depending on the direction the player is scrolling. The generation would have to be deterministic: the same seed value would generate the same terrain. I'm looking for algorithms that would satisfy the memory and CPU constraints of the hardware.
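
    A minimal sketch of one common approach (not from the thread itself): derive each column's height from an integer hash of the seed and column index, so any column can be regenerated on demand as the player scrolls, with no stored map and no floating point. All names here are illustrative:

      // Sketch: deterministic column heights from a 32-bit integer hash.
      // Integer-only, so it suits a 68000-class CPU. Assumes col >= 0.
      #include <cstdint>
      #include <cstdio>

      // Cheap integer mixing hash (the known "lowbias32" constants).
      static uint32_t hash32(uint32_t x) {
          x ^= x >> 16;  x *= 0x7feb352dU;
          x ^= x >> 15;  x *= 0x846ca68bU;
          x ^= x >> 16;
          return x;
      }

      // Height of terrain column `col` for a given world seed: hash two
      // lattice points (value noise) and interpolate so neighbouring
      // columns stay smooth. Same seed -> same terrain, in any order.
      int columnHeight(uint32_t seed, int col, int maxHeight) {
          const int cell = 8;                          // lattice spacing in columns
          int c0 = col / cell, t = col % cell;
          int h0 = hash32(seed ^ (uint32_t)c0)       % maxHeight;
          int h1 = hash32(seed ^ (uint32_t)(c0 + 1)) % maxHeight;
          return h0 + (h1 - h0) * t / cell;            // linear interpolation
      }

      int main() {
          for (int col = 0; col < 16; ++col)
              std::printf("col %2d -> height %d\n", col, columnHeight(0xC0FFEE, col, 32));
      }

    Because each column depends only on (seed, col), the game can fill just the newly exposed column of tiles per scroll step, which keeps both memory and CPU cost bounded.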

    Read the article

  • Force full-screen game to one monitor?

    - by Joachim Pileborg
    I have two monitors, one 1920x1200 and the other 1920x1080, and in 10.10 they were "separate": when I opened the display preferences they were shown as separate screens. Since installing 11.04 (from scratch) I instead have one giant 3840x1200 screen spread over the two monitors. Not a problem per se, except when I want to play full-screen games! When playing games I want them on the primary (1920x1200) monitor, but since the game only detects one screen I can't do that, even if I lower the resolution in-game. I have an nVidia GTS 250 card using the nvidia-current driver (version 270.41.06), even though "Additional Drivers" reports the driver is "activated but not currently in use". Is there a way to force the game to use only one of the monitors? Or make the game detect both monitors?

    Read the article

  • Could someone break this nasty habit of mine please?

    - by MimiEAM
    I recently graduated in CS and was mostly unsatisfied, since I realized that I received only a basic theoretical approach to a wide range of subjects (which is what college is supposed to do, but still...). Anyway, I got into the habit of spending a lot of time looking for implementations of concepts, and upon finding them I would use them as guides to writing my own implementations, just for fun. But now I feel like the only way I can fully understand a new concept is by trying to implement it from scratch, no matter how unoptimized the result may be. This behavior leads me to choose the hard, time-consuming way by default instead of using a nicely written library, until I hit my head against a huge wall and then try to find a library that works for my purpose.... Does anyone else do this, and why? It seems so weird; why would anyone (including me) do that? Is it a bad practice? And if so, how can I stop doing it?

    Read the article

  • Pragmas and exceptions

    - by Darryl Gove
    The compiler pragmas:

      #pragma no_side_effect(routinename)
      #pragma does_not_write_global_data(routinename)
      #pragma does_not_read_global_data(routinename)

    are used to tell the compiler more about the routine being called, enabling it to do a better job of optimising around the routine. If a routine does not read global data, then global data does not need to be stored to memory before the call to the routine. If the routine does not write global data, then global data does not need to be reloaded after the call. The no_side_effect directive indicates that the routine does no I/O, does not read or write global data, and that its result depends only on the input.

    However, these pragmas should not be used on routines that throw exceptions. The following example demonstrates the problem:

      #include <iostream>

      extern "C" {
        int exceptional(int);
        #pragma no_side_effect(exceptional)
      }

      int exceptional(int a) {
        if (a == 7) {
          throw 7;
        } else {
          return a + 1;
        }
      }

      int a;
      int c = 0;

      class myclass {
      public:
        int routine();
      };

      int myclass::routine() {
        for (a = 0; a < 1000; a++) {
          c = exceptional(c);
        }
        return 0;
      }

      int main() {
        myclass f;
        try {
          f.routine();
        } catch (...) {
          std::cout << "Something happened" << a << c << std::endl;
        }
      }

    The routine "exceptional" is declared as having no side effects; however, it can throw an exception. The no_side_effect directive enables the compiler to avoid storing global data back to memory and retrieving it after the function call, so the loop containing the call to exceptional is quite tight:

      $ CC -O -S test.cpp
      ...
      .L77000061:
      /* 0x0014  38 */  call   exceptional  ! params = %o0 ! Result = %o0
      /* 0x0018  36 */  add    %i1,1,%i1
      /* 0x001c     */  cmp    %i1,999
      /* 0x0020     */  ble,pt %icc,.L77000061
      /* 0x0024     */  nop

    However, when the program is run the result is incorrect:

      $ CC -O t.cpp
      $ ./a.out
      Something happened00

    If the code had worked correctly, the output would have been "Something happened77" - the exception occurs on the seventh iteration. Yet the current code produces a message that uses the original values of the variables 'a' and 'c'. The problem is that the exception handler reads global data, and due to the no_side_effect directive the compiler has not updated the global data before the function call. So these pragmas should not be used on routines that have the potential to throw exceptions.

    Read the article

  • Pointers in C vs No pointers in PHP

    - by AnnaBanana
    Both languages have similar syntax. Why does C have the weird * character that denotes pointers (which hold some kind of memory address of a variable's contents?), when PHP doesn't have it and you can do pretty much the same things in PHP that you can do in C, without pointers? I guess the PHP compiler handles this internally; why doesn't C do the same? Doesn't this add unneeded complexity to C? For example, I don't understand them :)
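
    As a minimal illustration of what the * actually does (a sketch, with invented names): a pointer is an explicit memory address, which lets a function modify the caller's variable in place, something PHP handles for you behind the scenes with references:

      #include <iostream>

      // A pointer holds the address of another variable, so the callee
      // can change the caller's data rather than a copy of it.
      void addOne(int *p) {
          *p += 1;              // dereference: follow the address, modify the target
      }

      int main() {
          int x = 41;
          int *px = &x;         // & takes the address of x; px now "points at" x
          addOne(px);           // pass the address, not a copy of the value
          std::cout << x << "\n";   // prints 42: x itself was modified
          // PHP hides this machinery: `function addOne(&$n) { $n += 1; }`
          // achieves the same effect without exposing raw addresses.
      }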

    Read the article

  • How to refactor my design, if it seems to require multiple inheritance?

    - by Omega
    Recently I asked a question about Java classes implementing methods from two sources (kinda like multiple inheritance). However, it was pointed out that this sort of need may be a sign of a design flaw; hence, it is probably better to address my current design rather than trying to simulate multiple inheritance.

    Before tackling the actual problem, some background info about a particular mechanic in this framework: It is a simple game development framework. Several components allocate some memory (like pixel data), and it is necessary to get rid of it as soon as you don't need it. Sprites are an example of this. Anyway, I decided to implement something a la manual reference counting from Objective-C. Certain classes, like Sprites, contain an internal counter, which is increased when you call retain() and decreased on release(). Thus the Resource abstract class was created: any subclass of this will obtain the retain() and release() implementations for free, and when its count hits 0 (nobody is using the object), it will call the destroy() method. The subclass needs only to implement destroy(). This is because I don't want to rely on the garbage collector to get rid of unused pixel data.

    Game objects are all subclasses of the Node class, which is the main construction block, as it provides info such as position, size, rotation, etc.

    See, two classes are used often in my game: Sprites and Labels. Ah... but wait. Sprites contain pixel data, remember? And as such, they need to extend Resource. But this, of course, can't be done: Sprites ARE nodes, hence they must subclass Node. But heck, they are resources too. Why not make Resource an interface? Because I'd have to re-implement retain() and release(), and I am avoiding writing the same code over and over (remember that there are multiple classes that need this memory-management system). Why not composition? Because I'd still have to implement methods in Sprite (and similar classes) that essentially delegate to the methods of Resource. I'd still be writing the same code over and over! What is your advice in this situation, then?
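
    For reference, here is a minimal sketch of the retain()/release() mechanic the post describes, written in C++ purely for illustration (C++ happens to allow both base classes, which is exactly what the Java design cannot do; all names beyond retain/release/destroy are invented):

      // Sketch of the manual reference-counting mechanic described above.
      class Resource {
          int refCount = 1;            // the creator holds the first reference
      public:
          virtual ~Resource() = default;
          void retain()  { ++refCount; }
          void release() { if (--refCount == 0) destroy(); }
      protected:
          virtual void destroy() = 0;  // subclass frees pixel data etc.
      };

      class Node { /* position, size, rotation ... */ };

      // Legal in C++; the Java question is how to get the same effect
      // with single inheritance (composition, or an interface plus a
      // shared helper, are the usual answers there).
      class Sprite : public Node, public Resource {
      protected:
          void destroy() override { /* free pixel data */ }
      };

      int main() {
          Sprite *s = new Sprite();    // refCount == 1
          s->retain();                 // a second holder appears
          s->release();                // back to 1
          s->release();                // hits 0 -> destroy() frees pixel data
          delete s;                    // the object itself (Java's GC would do this)
      }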

    Read the article

  • What are the pro/cons of Unity3D as a choice to make games?

    - by jokoon
    We are doing our school project with Unity3D, since they were using Shiva the previous year (which seems horrible to me), and I wanted to know your point of view on this tool.

    Pros:

      - multi-platform; I even heard Google is going to implement it in Chrome
      - everything you need is there
      - scripting languages make it a good choice for people who are not programming gurus

    Cons:

      - multiplayer?
      - proprietary; you are totally dependent on Unity and its limits and can't extend it
      - it's less "making a game from scratch"
      - C++ would have been a cool thing

    I really think this kind of tool is interesting, but is it worth using at school for a project that involves more than 3 programmers? What do we really learn in terms of programming from using this kind of tool (I'm OK with Python and JS, but I hate C#)? We could have used Ogre instead, even if we were only starting to learn DirectX in January...

    Read the article

  • Implementing invisible bones

    - by DeadMG
    I suddenly have the feeling that I have absolutely no idea how to implement invisible objects/bones. Right now, I use hardware instancing to store the world matrix of every bone in a vertex buffer, and then send them all to the pipeline. But frustum culling, and the simulation setting some bones invisible for other reasons, mean that at any given moment some of them should not be drawn. Does this mean I effectively need to re-fill the buffer from scratch every frame with only the visible units' matrices? This seems to me like it would involve a lot of wasted bandwidth.
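
    For what it's worth, a common pattern is exactly that: each frame, compact only the visible matrices into the instance buffer and draw that count, so the upload cost is bounded by the visible set rather than the total. A minimal sketch under assumed names (no particular graphics API; the map/unmap/draw calls are stand-ins):

      #include <cstddef>
      #include <cstring>
      #include <vector>

      struct Matrix4 { float m[16]; };

      struct Instance {
          Matrix4 world;     // bone's world matrix, updated by the simulation
          bool    visible;   // result of frustum culling / game logic
      };

      // Stand-ins for real API calls (e.g. D3D Map/Unmap or glMapBuffer).
      static std::vector<char> instanceBufferMemory;
      void *mapInstanceBuffer(std::size_t bytes) {
          instanceBufferMemory.resize(bytes);
          return instanceBufferMemory.data();
      }
      void unmapInstanceBuffer() {}
      void drawInstanced(std::size_t count) { /* instanced draw of `count` copies */ }

      void submitVisible(const std::vector<Instance> &all) {
          static std::vector<Matrix4> scratch;     // reused; no per-frame allocation
          scratch.clear();
          for (const Instance &inst : all)
              if (inst.visible)
                  scratch.push_back(inst.world);   // compact visible matrices only

          if (scratch.empty()) return;
          void *dst = mapInstanceBuffer(scratch.size() * sizeof(Matrix4));
          std::memcpy(dst, scratch.data(), scratch.size() * sizeof(Matrix4));
          unmapInstanceBuffer();
          drawInstanced(scratch.size());           // draw only what survived culling
      }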

    Read the article

  • How can I prevent my laptop from freezing when I connect my external display?

    - by user170230
    I'm facing a problem when connecting a display to my laptop (a Toshiba mini NB 200) running Ubuntu 13.04.

      Processor: Intel® Atom™ CPU N270 @ 1.60GHz × 2
      Memory: 993.6 MiB
      Graphics: Intel® 945GME x86/MMX/SSE2
      OS type: 32-bit
      Disk: 156.3 GB

    For the first five minutes after connecting the display everything runs perfectly, but after that my laptop does not respond any more and I don't know what to do. When I disconnect the HDMI cable the screen just goes all black.

    Read the article

  • How can I convince management to deal with technical debt?

    - by Desolate Planet
    This is a question that I often ask myself when working with developers. I've worked at four companies so far and I've become aware of a lack of attention to keeping code clean and dealing with technical debt that hinders future progress in a software app. For example, the first company I worked for had written a database from scratch rather than use something like MySQL and that created hell for the team when refactoring or extending the application. I've always tried to be honest and clear with my manager when he discusses projections, but management doesn't seem interested in fixing what's already there and it's horrible to see the impact it has on team morale. What are your thoughts on the best way to tackle this problem? What I've seen is people packing up and leaving. The company then becomes a revolving door with developers coming in and out and making the code worse. How do you communicate this to management to get them interested in sorting out technical debt?

    Read the article

  • Netbook partitioning scheme suggestions

    - by David B
    I got a new Asus EEE PC 1015PEM with 2GB RAM and a 250GB HD. After playing with the netbook edition a little, I would like to install the desktop edition I'm used to. In addition to the Ubuntu partition(s), I would like to have one separate partition for data (documents, music, etc.), so I can try other OSs in the future without losing the data. What partition scheme would you recommend? I usually like to let the installer do it by itself, but when I try that I can only use the entire disk, so I don't get the desired data partition. I wish there was a way to see the recommended default partitioning scheme and then just tweak it a bit to fit your needs (instead of building one from scratch). So, how would you recommend I partition my HD? Please be specific, since I have never manually partitioned before. Thanks!
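
    Not from the thread, but as a concrete illustration of the kind of scheme being asked about (the sizes are assumptions for a 250GB disk, not a recommendation):

      /       20 GB    ext4   (Ubuntu system; can be wiped and reinstalled)
      /data   226 GB   ext4   (documents, music; untouched by reinstalls)
      swap    4 GB            (2GB RAM, with headroom for hibernation)

    Any OS installed later can reuse the data partition simply by mounting it.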

    Read the article

  • Harnessing Business Events for Predictive Decision Making - part 1 / 3

    - by Sanjeev Sharma
    Businesses have long relied on data mining to elicit patterns and forecast future demand and supply trends. Improvements in computing hardware, specifically storage and compute capacity, have significantly enhanced the ability to store and analyze mountains of data in ever-shrinking time-frames. Nevertheless, the reality is that data growth is outpacing storage capacity by a factor of two, and computing power is still very much bounded by Moore's Law, doubling only every 18 months.

    Faced with this data explosion, businesses are exploring means to develop human-brain-like capabilities in their decision systems (including BI and analytics) to make sense of the data storm, in other words business events, in real time, and to respond proactively rather than reactively. It is more like having a little bit of the right information just a little bit beforehand than having all of the right information after the fact. To appreciate this thought better, let's first understand the workings of the human brain.

    Neuroscience research has revealed that the human brain is predictive in nature, and that talent is nothing more than exceptional predictive ability. The cerebral cortex, the part of the human brain responsible for cognition, thought, language etc., comprises five layers. The lowest layer in the hierarchy is responsible for sensory perception, i.e. discrete, detail-oriented tasks, whereas each of the layers above is increasingly focused on assembling higher-order conceptual models. Information flows both up and down the layered memory hierarchy. This allows the conceptual mental models to be refined over time through experience and repetition. Secondly, and more importantly, the top layers are able to prime the lower layers to anticipate certain events based on the existing mental models, thereby giving the brain a predictive ability. In a way, the human brain develops a "memory of the future", a sort of anticipatory thinking which lets it predict based on the occurrence of events in real time. A higher order of predictive ability stems from being able to recognize the lack of certain events. For instance, it is one thing to recognize the beats in a music track and another to detect beats that were missed, which involves a higher-order predictive ability.

    Existing decision systems analyze historical data to identify patterns and use statistical forecasting techniques to drive planning. They are similar to the human brain in that they employ business rules, very much like mental models, to chunk and classify information. However, unlike the human brain, existing decision systems are unable to evolve these rules automatically (AI is still best suited for highly specific tasks) and predict the future based on real-time business events. Make no mistake: existing decision systems remain vital to driving long-term and broader business planning. For instance, a telco will still rely on BI and analytics software to plan promotions and optimize inventory, but will tap into business-event-enabled predictive insight to identify specifically which customers are likely to churn and engage with them proactively.

    In the next post, I will describe the technology components that enable businesses to harness real-time events and drive predictive decision making.

    Read the article

  • Multi-client web application: should I use custom user controls or a common user control?

    - by ValidfroM
    Say my company is going to build a complicated ASP.NET Web Forms education system. One of the modules is web-based registration. To make it flexible, we decided to use user controls (ascx) with a rule engine (workflow) regulating all the business logic behind them. Thus, in the future, for different clients, we can simply configure the basic existing rules or add new ones (rules stored in the DB or in XML per client). Now the question is how to deal with the user controls (ascx). My opinion is to build different user controls from scratch for each client; the other voice says to reuse the existing user controls.

    Read the article

  • Which tools help to start the Ubuntu GUI at boot?

    - by Vimal Kumar
    I am on the way to creating a Live CD from scratch, using VirtualBox for this purpose. I installed the Ubuntu base from ubuntumini.iso and installed gnome-shell. Then I installed Remastersys, created a backup.iso, burned it to a CD, and booted it on a PC. It ends up in the CLI and does not start the GUI. I tried the same ISO in VirtualBox, and it works properly there. I think I missed some packages which help to start the GUI. Can you help me identify the packages I missed to include on the CD?

    Read the article

  • My Laptop Beeps when I try to USB boot install

    - by Gino
    I tried to boot-install Ubuntu on my laptop (which has no CD drive) using a USB drive. It goes to the boot selection menu (the one with the Ubuntu logo and installation options). I selected Install, then my laptop just beeps - 1 short beep - after that it stops, nothing installs, and it just stays at the installation menu. Can someone please help?

    Laptop specs:

      - Neo Notebook (forgot the model version)
      - 2GB RAM
      - running Win XP SP3
      - 150GB hard drive

    Would really appreciate it if someone helped. I just used the normal 12.04 installer.

    Read the article

  • How to Choose a Web Developer to Create Your Online Template Site

    Online template systems are found online and typically offer you an "easy" and inexpensive way to build your website. Notice the quotation marks around "easy": the actual process of building and maintaining an online template site might not feel easy or inexpensive. The reality is that, in most cases, it really is easier to use an online template system than to start a new website from scratch. Also, you can get a site up much more quickly, because the internal structure and background images of the site are already done. However, you should choose a developer who has some experience in this area.

    Read the article

  • Where to start building a BaaS

    - by Wesley
    I'm building a cloud platform, and the next phase of design involves building an extensible BaaS back end (see http://youtu.be/lNi-05-PyEw). The reason I think we can attempt this is that there are dozens of these kinds of extensible back-end data proxies popping up almost daily at this point, which tells me the enabling technology is there to build one from scratch in a few months. I'd like to start in the right area:

      - What kind of dev background should I look for?
      - What kind of tech stack should I build on?
      - What kind of costs can I expect in terms of man-hours, etc.?

    I know there isn't one right answer here, but I think this is the right sub to post this in, and credit will go to the most constructive answer.

    Read the article

  • How do I install lubuntu? (kernel panic)

    - by melvincv
    Please help me install Lubuntu 12.04 i386 on an old computer. I select "Try Lubuntu without installing" and it crashes with a kernel panic. Rarely, I do get to the live OS, but soon the display goes blank. The messages log gives me '[drm] ERROR GPU hung/wedged'. The specs are:

      - Pentium 4 2.4GHz
      - 1GB DDR RAM
      - 40GB PATA HDD
      - Intel 845GL chipset (8MB framebuffer, 64MB shared system memory set in the BIOS)

    Read the article

  • Lazy Initialization in .NET 4.0

    Lazy initialization, or lazy instantiation, means that an object is not created until it is first referenced. It is used to reduce wasteful computation and memory requirements. Following is an example of where lazy initialization is particularly useful.
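
    (The article's own example is behind the link; as a stand-in, here is a minimal sketch of the same idea, written in C++ rather than .NET's Lazy<T>, with invented names. The expensive object is built on first access, not at program start.)

      #include <iostream>

      struct Report {
          Report() { std::cout << "expensive construction runs now\n"; }
          void print() { std::cout << "report contents\n"; }
      };

      Report &getReport() {
          // Function-local static: constructed on the first call only,
          // and thread-safe since C++11 ("magic statics").
          static Report instance;
          return instance;
      }

      int main() {
          std::cout << "program started; no Report built yet\n";
          getReport().print();   // first reference triggers construction
          getReport().print();   // reuses the same instance
      }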

    Read the article

  • "No root file system is defined"

    - by user169670
    I have recently installed Ubuntu 12.04.2 LTS by USB on my newly built custom PC, and I have run into a problem during installation, with an error saying "No root file system is defined." My PC specifications:

      - AMD Phenom x4 955 Black Edition
      - ASRock 960GM/U3S3 FX Micro ATX AM3+ Motherboard
      - Mushkin Redline 8GB (2x4GB) DDR3-1866 Memory
      - Seagate Barracuda 1.5TB 3.5" 7200RPM Internal Hard Drive
      - XFX Radeon HD 7850 1GB Video Card
      - XFX 550W 80 PLUS Bronze Certified ATX12V / EPS12V Power Supply

    Everything is new.

    Read the article

  • How to survive if you can only do things your way as a programmer?

    - by niceguyjava
    I hate Hibernate, I hate Spring, and I am the kind of programmer who likes to do things his way. I hate micromanagement and other people making decisions for me about what framework I should use, what patterns I should apply (I hate patterns too), and what architecture I should design. I consider myself a successful programmer and have a decent financial situation due to my performance in past jobs, but I just can't take the standard Java jobs out there. I really love to design things from scratch and hate when I have to maintain other people's bad code, design, and architecture, which is the majority of what you find out there, for sure. Does anybody relate to that? What do you recommend: open my own company, do consulting, or just keep looking hard until I find a job that suits my preferences, as hard as that may be with all the Hibernate and Spring crap out there?

    Read the article
