Search Results

Search found 11954 results on 479 pages for 'gets'.


  • Blank screen after Switch User or Resume

    - by matt wilkie
    About half the time when I switch users or resume from standby, the screen goes blank (black). If I work the cursor keys I can hear the system bell when it gets to the end of the user list, and, going from memory, I can even log in successfully, but the screen stays black. Sometimes closing and re-opening the lid will light up the screen again. Pressing the special Function key to enable/disable the external monitor connection has no effect [Fn]-[F5], [Fn]-[F6]. If none of the previous steps work I need to put the computer into hibernation or full power off to restore screen function. If I watch closely when switching users I think I can see the screen initially start to light up and then quickly fade to black. The computer is an Acer Aspire 3500, model ZL6, running Ubuntu 10.10 installed 2 days ago. No proprietary drivers are in use. I'll provide a list of hardware details as soon as I can figure out how to generate that (didn't there used to be an entry for hardware details under the System menu?). Possibly related questions: "No resume after Hibernate or Standby", "When I resume from suspension - the screen is blank", "Switch user fails to complete successfully". For what it's worth, a blank screen after resume also used to happen occasionally when the laptop was running XP Home, but nowhere near as often, perhaps 6 or 8 times a year.
    UPDATE: I found System > Administration > System Testing and ran the Monitor test. The screen went very, very dark, but the window elements could be discerned, and the whole screen flashed (from very, very dark to black). On the third repeat of that same test the screen went to full black and stayed there. Moving the mouse (via touchpad) or touching keys did not wake it up again. I had to close the lid, put the computer into hibernate, and press the power button to restore it.
    UPDATE2: output of lshw: http://pastebin.com/q7n8676r, lspci: http://pastebin.com/6ujzVK4r
    UPDATE3: sometimes I can restore the screen by flipping to console 1 with ctrl-alt-F1 and then back to graphical with ctrl-alt-F7.

    Read the article

  • 12.10 visual performance using nvidia driver

    - by user100485
    My fresh Ubuntu 12.10 install is slow. Nothing extreme, but dragging windows, switching workspaces and things like that are just slow and look horrible; it feels like the fps dropping in a game. Doing some Photoshop work in Windows was a relief by comparison! The effect gets worse if I connect my external monitor. My system is an Intel Pentium dual core T4500 with 4GB memory and a GeForce 8200M G/integrated/SSE2 graphics chip. Nothing fancy, but it should be able to run OK. My "experience" in Ubuntu is set to standard. (MSI CR500 laptop.) I've installed the Nvidia drivers and tried both current and experimental; the experimental drivers seem to perform a bit better, but overall it's still bad. I set the mode to adaptive in the nvidia-settings tool and it goes to the maximum setting directly and doesn't come back. Using htop I found out that compiz or the X server always uses a few percent of my CPU, more than I think it should; the time consumed is 5:18 for compiz, 4:33 for /usr/bin/X and 2:41 for google chrome (about 30 tabs open, so not too strange I think). What can I do to increase the visual performance? Because this makes me not want to use Ubuntu in public!

    Read the article

  • What is a legal way to use music from registered authors in a game?

    - by mm24
    I have recently asked a question about music in games like Guitar Hero. I have found that in Europe (at least), if I want to use a track composed by a musician who is a member of a royalty collecting society, I need to pay a flat fee to the society and not only to the member. So a "one-to-one" agreement is not valid, and the society can come to me and ask for money for each download, even if the game is FREE! This is a fee sheet list of the UK agency: for the fee, see "Permanent download services". It is about 1,200 GBP for less than 22,000 copies, and they DON'T specify anything more; they told me on the phone that I need to wait and see how many downloads I get before knowing the price. This is kind of crazy, as if I give away the app for free I will still have to PAY 1,200 GBP!! I am shocked and I feel very bad. One agency suggested that I use a fake name for the artist, but that is not fair to my collaborators, as what they hope is that the app gets lots of downloads so that other people will get to know about them and hopefully commission them more work. The other solution is to work only with non-registered musicians. The question here to you is: has anyone found a legal way to use music from registered authors in a game?

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices rather than having duplicate vertices; in the case of displaying a simple cube, it means I only need eight vertices total, as opposed to three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is: how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices via the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normals, but I would like to render a cube, and that still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
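    The usual compromise is the duplication the poster describes at the end: a cube gets 24 vertices (four per face) rather than eight, so each face carries its own normal and texture coordinates, while the index buffer still avoids repeating a corner for both triangles of a face. A minimal sketch of that layout, written here in Java with hypothetical names (the question's gl.* naming suggests a managed binding, so treat this as illustrative only):

      public class CubeFaceData {
          // Interleaved position (x,y,z) + normal (nx,ny,nz) for the front
          // face only; a full cube repeats this for all 6 faces -> 24 vertices.
          // Corners shared with other faces are duplicated there, each copy
          // carrying that face's own normal (and UVs, if present).
          static final float[] FRONT_FACE = {
              -1f, -1f, 1f,   0f, 0f, 1f,
               1f, -1f, 1f,   0f, 0f, 1f,
               1f,  1f, 1f,   0f, 0f, 1f,
              -1f,  1f, 1f,   0f, 0f, 1f,
          };
          // Indexing still pays off within a face: 4 vertices serve 2 triangles.
          static final short[] FRONT_INDICES = { 0, 1, 2, 2, 3, 0 };
      }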

    Read the article

  • Pointer position way off in Java application menus when using gnome-shell

    - by Hailwood
    When using any Java application in gnome-shell, if the window is maximised the pointer position is way off, but only in the menus; in the editor or the side panel the pointer is fine. This only presents itself when the window is maximized, and it seems that the further away from 0x0 the window is when you maximise it, the bigger the pointer offset. From what I have gathered, it has to do with the window not updating its size when it gets maximised. The other issue is that when a gnome-shell notification appears and I click on it, I lose the ability to type in the editor; I can select text etc., but can't give it focus to type. I must bring up some other text input (e.g. right-click on a file on the left and select rename, which brings up a rename dialog); after that I can type in the editor again. So, how can I fix this? Below is as much information as I can think to provide:
    $ gnome-shell --version
    GNOME Shell 3.6.1
    $ java -version
    java version "1.7.0_09"
    Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
    Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)
    $ file /etc/alternatives/java /etc/alternatives/javac
    /etc/alternatives/java: symbolic link to '/usr/lib/jvm/java-7-oracle/jre/bin/java'
    /etc/alternatives/javac: symbolic link to '/usr/lib/jvm/java-7-oracle/bin/javac'

    Read the article

  • Nvidia optimus and Steam (on 12.04)

    - by Seiryuu
    I've obtained a copy of the .deb for the Steam beta, but it was pretty disappointing to see that it simply doesn't run. Hardware: Dell XPS L502, with Nvidia Optimus. I have bumblebee installed. Trying to run Steam with the Intel HD 3000 completely fails to start it: I get the message "Installing breakpad exception handler for appid(steam)/version(1352224866_client)" followed by a crash with no other information provided. Trying to optirun steam runs the client, but as soon as it gets to the home screen, it says that the Nvidia drivers I am using are out of date (and Steam requires newer drivers to run). It's probably worth noting that it throws the same "Installing breakpad..." message when run with optirun, but it doesn't crash the client immediately. Any way to fix this? Also, is there a way of manually updating the drivers in bumblebee without breaking anything? Alternatively, is there a reliable way of completely disabling the Intel GPU (in order to use the Nvidia GPU exclusively)? Note: I am using Xmonad with gnome-fallback, if that makes a difference. However, when I tried everything mentioned with Unity (2D), everything was the same, so I guess it has nothing to do with the window manager in use.

    Read the article

  • Does concurrency inherently introduce "randomness" into a game?

    - by Jeff
    When a game is implemented with concurrency (as most games are), does this necessarily, by its very nature, introduce an element of randomness into the game that is outside of the players' control? Note that when I use the word "random", I'm not meaning to launch into a philosophical debate about the deterministic nature of the system. I understand that concurrency is deterministic in the sense that the operating system decides which processes to allow time on the CPU and in what order (or the JVM controls which Thread's turn it is to execute, etc). But my understanding of this is that there is no way to control or predict whether one thread's next command will execute before or after another. The reason I'm asking is because this seems like a fundamental difficulty for game development where a game is supposedly designed around a player's skill. Consider a game like League of Legends. Assume that two players are battling it out. It's a very close contest between the two and it's coming down to the wire -- so much so that whoever gets their last attack off will be the one to kill the other and win the game for their team. If the players are implemented using concurrency and the situation really was like this, is it essentially out of the players' hands at this point? Is the outcome of this match all up to whatever system is arbitrarily deciding which player's thread/process will execute next? If not, what am I misunderstanding about concurrency? If so, is there any way around this problem so that a game of skill can always be a game of skill, especially in those most crucial moments?
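    A tiny Java sketch of the nondeterminism being described (my example, not the poster's; names are made up): two threads race to print, and nothing in the program text decides which one goes first; the scheduler does.

      public class RaceDemo {
          public static void main(String[] args) throws InterruptedException {
              Runnable attack = () ->
                  System.out.println(Thread.currentThread().getName() + " gets the last attack off");
              Thread playerA = new Thread(attack, "playerA");
              Thread playerB = new Thread(attack, "playerB");
              playerA.start();
              playerB.start();
              playerA.join();
              playerB.join();
              // The output order varies from run to run: the OS scheduler,
              // not the code, picks the interleaving.
          }
      }

    This is also why engines that care about fairness or replays tend to timestamp inputs and resolve them in a single deterministic simulation loop, rather than letting threads race to mutate game state directly.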

    Read the article

  • Stuck on EULA screen when installing netflix-desktop

    - by Jim
    I am trying to install netflix-desktop on my laptop running 13.10. I followed the instructions here:
    sudo apt-add-repository ppa:ehoover/compholio
    sudo apt-get update
    sudo apt-get install netflix-desktop
    After a while, there is a EULA in my xterm that says I must agree to it to get the software. At the bottom of the screen there is an '<Ok>'. I have hit <Enter>, 'A', <Tab>, typed 'OK' in that xterm, but it never gets past that. The application is not available, because if I try to launch netflix-desktop in another xterm it doesn't know what I'm talking about, and netflix isn't found on my system when I search. Can anybody tell me what I'm supposed to do next, or what I should have done so that I'm not in this situation next time? Thanks!
    Solution: I found out that I had to hit the down arrow several times till the <Ok> lit up. Then I could hit <Enter> to go to the next step. Then, by using the arrow keys to highlight the proper response(s), I was able to complete the installation. I haven't actually brought up Netflix yet, but it appears to be installing things as expected.

    Read the article

  • Carriers Holding Your OS Updates Hostage

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/10/10/carriers-holding-your-os-updates-hostage.aspx
    Just a small rant here. Today the Windows Phone 8 GDR2 update finally became available for Nokia handset users. Now I'm not sure that it is entirely AT&T's fault that Samsung and HTC users got their updates two months ago and we are just finally seeing it; it may have something to do with the Nokia Amber update. But every Windows Phone update on AT&T from 7.1 on seems to have been delayed. How is it that the premier Windows Phone carrier is always the last one to release updates? Smart phone ecosystems are a partnership between the OS provider, the hardware manufacturer and the carriers. If any one of those partners does not hold up its responsibilities, then everyone gets a black eye. The goal for all involved should be to release updates as early as possible with reasonable assurance of stability. This ensures the satisfaction of consumers and increases the likelihood of future sales. From what I have seen so far, AT&T has been the one breaking the consumer's trust in the Windows Phone ecosystem. Aside from voicing our dissatisfaction, we may need to start voting with our feet until they realize that being a poor citizen has consequences.
    Technorati Tags: ATT, Windows Phone, Windows Phone 8, Microsoft, Nokia, GDR2

    Read the article

  • Algorithmically generating neon layers on pixel grid

    - by user190929
    In an attempt at a screensaver I am making, I am a fan of neon-like graphics, which, of course, look great against a black background. As I understand it, neon, graphically speaking, is essentially a gradient of a color: brightest in the center, getting darker proceeding outward. A more accurate description separates it into tubes and glow: the tubes are mostly white (or a light variant of the color), while the glow is darker and is where most of the color is seen. Anyhow, my question is: how could you generate such things given an initial pattern of pixels that would be the tubes? For example, let's say I want to make a neon 'H'. I can, via the libraries, attain the rectangles of pixels which represent it, but I want to make it look neonized. How could I algorithmically achieve such an effect given a base tube shape and base color?
    EDIT: OK, I misstated that. Got a bit distracted. My purpose for this was similar to a neon effect, but not quite it. Sorry about that. What I am looking for is something like this. Start with a pattern of pixels:
    [!][!][!][!][!][!][!][!]
    [!][!][O][!][!][!][!][!]
    [!][!][O][O][!][!][!][!]
    [!][!][!][!][O][!][!][!]
    [!][!][!][!][!][!][!][!]
    How do I find the E pixels?
    [!][E][E][E][!][!][!][!]
    [!][E][O][E][E][!][!][!]
    [!][E][O][O][E][E][!][!]
    [!][E][E][E][O][E][!][!]
    [!][!][!][E][E][E][!][!]
    Sorry if that looks bad.
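    The E cells in the grids above are exactly the '!' cells with at least one 'O' in their 8-cell neighbourhood, i.e. one pass of morphological dilation; running further passes over the result gives wider (and, for glow, dimmer) rings. A sketch in Java (my code, not from the post):

      public class GlowLayer {
          // Marks every '!' cell that touches an 'O' cell (8-neighbourhood) as 'E'.
          // One pass gives one glow ring; repeat on the output for wider rings.
          static char[][] dilate(char[][] grid) {
              int rows = grid.length, cols = grid[0].length;
              char[][] out = new char[rows][cols];
              for (int r = 0; r < rows; r++) {
                  for (int c = 0; c < cols; c++) {
                      out[r][c] = grid[r][c];
                      if (grid[r][c] != '!') continue;
                      scan:
                      for (int dr = -1; dr <= 1; dr++) {
                          for (int dc = -1; dc <= 1; dc++) {
                              int nr = r + dr, nc = c + dc;
                              if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                                      && grid[nr][nc] == 'O') {
                                  out[r][c] = 'E';
                                  break scan;
                              }
                          }
                      }
                  }
              }
              return out;
          }
      }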

    Read the article

  • Where to find and install ASUS motherboard drivers for Linux

    - by Dan
    This is my second day ever with Linux, and I had one heck of a time getting the nVidia drivers installed and working. Please keep in mind I am very new and just starting out. I currently have an ASUS P8Z68-V LE motherboard and I'm not sure if the drivers are installed. Where would I go to find that out? I am using Gnome as my UI. If I don't have the drivers installed, where would I go? The ASUS site only gives me options to download for various Windows OS, DOS and "other" (in .ROM format). Which should I take, and how should I install it? I'm mostly looking for audio drivers. A lot of music I play, either on YouTube or with VLC, has a faint crackling in the background on Ubuntu, which gets much worse the higher I turn the volume up. Could this be something other than the drivers? I doubt it's the hardware, since the sound seems fine on Windows. I am currently running 12.04.

    Read the article

  • Math questions at a programmer interview?

    - by anon
    So I went to an interview at Samsung here in Dallas, Texas. The way the recruiter described the job, he didn't make it sound like it was too math-oriented. The job basically involved graphics programming and C++. Yes, math is implied in graphics programming, especially shaders, but I still wasn't expecting this... The whole interview lasted about an hour and a half and they asked me nothing but math-related questions. They didn't ask me a single programming question, which I found odd. About all they did was ask me how to write certain math routines as C++ functions, but that's about it. What about programming philosophy questions? Design patterns? Code-correctness? Constness? Exception safety? Thread safety? There are a zillion topics that they could have covered. But they didn't. The main concern I have is that they didn't ask any programming questions. This basically implies to me that any programmer who is good at math can get a job here, but they might put out terrible code. Of course, I think I bombed the interview because I haven't used any sort of linear algebra in about a year and I forget math easily if I haven't used it in practice for a while. Are any of my fellow programmers out there this way? I'm a game programmer too, so this seems especially odd. The more I learn, the more old knowledge gets "popped" out of my "stack" (memory). My question is: does this interview seem suspicious? Is this a typical interview that large corporations have? During the interview they told me that Google's interview process is similar. They have multiple, consecutive interviews where the math problems get more advanced.

    Read the article

  • Lenovo ThinkPad L520 slows down when AC power adapter is plugged in

    - by Aamir
    I have a new laptop, a Lenovo ThinkPad L520 (7859-5BG), Core i5-2520M (2.5GHz) with 4GB RAM, with Ubuntu 11.10 32-bit installed. While browsing with Chrome on GNOME classic (no effects), I noticed 173% CPU usage by the chrome browser process, and the system slowly became very, very slow. Now, at this stage, as I removed the power adapter, the system suddenly got faster (and stopped the lagging behavior) and CPU usage dropped to 48%!
    Observation 1: I was browsing with Chrome when my system seemed to be seriously lagging, so I killed Chrome to see if it got any faster, but there was no difference. Notice that CPU usage was a bit strange here: it showed no high activity, but as soon as I clicked on applications in the GNOME panel, it would shoot CPU usage to 70, or 80, or 90, or 143% etc., depending on how quickly I clicked back and forth. At this instant I removed the AC adapter of my laptop, and suddenly the system got fine. So I again clicked on the GNOME panel, and noticed that it now took only 7% or 12% or 13% at max, with the same kind of clicks in the application menu.
    Observation 2: At other times, with the AC adapter plugged in, top indicates four instances of chromium taking 90%, 60%, 47% and 2% (for example), and then once I take out the AC adapter the same processes suddenly take less CPU.
    Intermediate conclusions: What does this indicate? I cannot figure out any "other" process in top that is suddenly being triggered; it is the same process that hogs up my CPU once AC power is plugged in! NOTE: the problem is now CONFIRMED, as I can reproduce it whenever the power adapter is plugged in! Can anyone tell me what exactly this indicates? What is wrong? Is it some bug with power management, or what?

    Read the article

  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it, because the paths stored in the suppression file are no longer valid, but you probably won't be aware of it until it happens to you. If you don't know what code analysis is or you don't know what the suppression file is then you can probably stop reading now; otherwise, read on for a simple short demo. Let's create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with a sp_ prefix is generally frowned upon, and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties. A subsequent build causes a code analysis warning, as we might expect. Let's suppose we actually don't mind stored procedures with sp_ prefixes; we can just right-click on the message to suppress it and get rid of it. That causes a suppression file to get created in our project. Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now, if we simply move the file within our project to a new folder, notice that the suppression that we just created gets removed from the suppression file. As I alluded above, this behaviour is intuitive because the path originally stored in the suppression file is no longer relevant, but you're probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet

    Read the article

  • What is the best approach for inline code comments?

    - by d1egoaz
    We are doing some refactoring of a 20-year-old legacy codebase, and I'm having a discussion with my colleague about the format of comments in the code (PL/SQL, Java). There is no default format for comments, but in most cases people do something like this in the comment:
    // date (year, year-month, yyyy-mm-dd, dd/mm/yyyy), (author id, author name, author nickname) and comment
    The proposed format that I want for future (and reformatted past) comments is:
    // {yyyy-mm-dd}, unique_author_company_id, comment
    My colleague says that we only need the comment, and must reformat all past and future comments to this format:
    // comment
    My arguments: I say that for maintenance reasons it's important to know when and who made a change (even if this information is in the SCM). The code is living, and for that reason has a history. Without the change dates it's impossible to know when a change was introduced without opening the SCM tool and searching through a long object history. Because the author is very important: a change by one author can be more credible than a change by another. Agility reasons: no need to open and navigate through the SCM tool. People would be more afraid to change something that someone did 15 years ago than something that was recently created or changed. Etc.
    My colleague's arguments: The history is in the SCM. Developers must not be aware of the history of the code directly in the code. Packages get 15k lines long, and unstructured comments make these packages harder to understand.
    What do you think is the best approach? Or do you have a better approach to solve this problem?
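    For concreteness, this is what the proposed format would look like at a change site in Java (a made-up example; the date, author id and reason are invented):

      public class CommentFormatExample {
          static int applyDiscount(int price, boolean premium) {
              // {2012-11-05}, jdoe42, premium customers now get the 10% rate
              // the spec promised (previously everyone paid full price)
              return premium ? price * 90 / 100 : price;
          }
      }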

    Read the article

  • How best to implement HTML5 support for my validation library

    - by Vivin Paliath
    I have created an annotation-based validation library called regula, and there seems to be some amount of interest around the framework. The next thing I'd like to do is support HTML5 validation. Originally I figured that I would check to see if the browser supported the HTML5 validation that has been specified, and either emulate it or delegate to the built-in regula equivalents. This is trivial for things like required, but once you start getting into date validation it gets tricky (date widgets, localization, etc.). So I have a few options in front of me: Full HTML5 shim along with widgets (for date stuff etc.): I feel like this is overkill and essentially reinventing the wheel, since this is already covered by things like Modernizr. Use HTML5 validation if available (either native, or provided by a shim; otherwise ignore): this means that if HTML5 validation is available (natively or through a shim) I will use it; otherwise I will ignore it. I'm leaning towards the latter, since anyone who wants to use HTML5 validation today will most probably require a shim anyway, because not all browsers support HTML5. Which option do you think is better?

    Read the article

  • Five Holiday Gaming Tips for an Active Game Table

    - by Jason Fitzpatrick
    Getting together for the holidays represents a great opportunity to introduce new players to the fun of tabletop gaming. Make sure to introduce them right with these five handy tips. Courtesy of GeekDad, we find five tips for introducing new players to the fun of tabletop games, old and new, over the holidays. Tip number one:
    1. Start short. Not everyone is ready for a multi-hour game session right after a big holiday dinner. Post-prandial drowsiness doesn't go well with a game that takes twenty minutes to set up and another fifteen to explain, so don't lose your audience before you get to the good stuff. Pick something speedy that gets people into the game with little downtime. If possible, get them laughing — I hear it causes the release of endorphins, which makes them feel better, which will lead to more gaming. (We'll work on the dopamine receptors later, when you get them hooked on learning new games.) Games like Zombie Dice and Spot It! are easy to teach and can handle a pile of players. FlowerFall and Ca$h 'n' Gun$ are guaranteed to make people gravitate to the game table to see what's going on.

    Read the article

  • How can I hard reset a USB device?

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found to fix it once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged in on, so I'm looking for a way to do this through the command line. This post suggests running: sudo modprobe -w -r usb_storage; sudo modprobe usb_storage However I get an "unknown option -w" output. This slightly modified command: sudo modprobe -r usb_storage Fails with the message FATAL: Module usb_storage is in use. If I try to kill -9 the processes marked [usb-storage] before running they refuse to die (I think because they are deeply tied to the kernel). Anyone know of a way to do this? NOTE: I cross-posted this on superuser.com as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.

    Read the article

  • SQLAuthority News – Presenting at Tech-Ed On Road – Ahmedabad – June 11, 2011 – Wait Types and Queues

    - by pinaldave
    I will be presenting in person on the subject of SQL Server Wait Types and Queues at Ahmedabad on June 11, 2011. Here is a quick summary of the session. SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting. Time: 11:15am – 12:15pm, June 11, 2011. Just like a horoscope, SQL Server Waits and Queues can reveal your past, explain your present and predict your future. SQL Server performance tuning uses Waits and Queues as a proven method to identify the best opportunities to improve performance. A glance at wait types can tell where there is a bottleneck. Learn how to identify bottlenecks and potential resolutions in this fast-paced, advanced performance tuning session. This session is based on my performance tuning Wait Types and Queues series: SQL SERVER – Summary of Month – Wait Type – Day 28 of 28. During the session there will be a quiz, and those who get the right answers will get very interesting gifts from me. Do not miss a single minute of the event. We are also going to have two rock star speakers – Harish Vaidyanathan and Jacob Sebastian. Here are the details for the event: SQLAuthority News – Community Tech Days – TechEd on The Road – Ahmedabad – June 11, 2011. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology

    Read the article

  • Automated Error Reporting in .NET Reflector - harnessing the most powerful test rig in existence

    - by Alex.Davies
    I know a testing system that will find more bugs than all the unit testing, integration testing, and QA you could possibly do. And the chances are you're not using it. It's called your users. It's a cliché that you should test so that you find your bugs rather than your users. Of course you should. But it's also a cliché that no software is ever shipped bug-free. Lost cause? No, opportunity! I think .NET Reflector 6 is pretty stable. In fact I know exactly how stable it is, because some (surprisingly high) proportion of its users tell me every time it crashes. If they press "Send Error Report", I get the report, and then I fix it. As a rough guess, while a standard stack trace is enough to fix a problem 30% of the time, having all those local variables in the stack trace means I can fix it about 80% of the time. How does this all happen? Did it take ages to code this swish system? Nope, it was one checkbox in SmartAssembly. It adds some clever code to your assembly to capture local variables every time an exception is thrown, and to ask your user to report it to you, with a variety of other useful information. Of course not all bugs show up as exceptions. But if you get used to knowing that SmartAssembly will tell you when an exception happens, you begin to change your coding style. Now, as long as an exception gets thrown in any situation you don't expect, you'll fix it if it ever happens. You'll start throwing exceptions liberally, and stop having to think about whether tiny edge cases are possible, as long as they throw an exception if they happen.
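    The post is about .NET, but the coding-style point is language-agnostic. A sketch of the "throw liberally on cases you didn't think through" habit, written here in Java (my example, not the author's):

      public class ExceptionStyle {
          static double discountFor(int itemCount) {
              if (itemCount < 0) {
                  // Instead of reasoning about whether a negative count can ever
                  // reach here, throw: if it happens in the field, the automated
                  // error report carries the stack (and, with the tooling the
                  // post describes, the locals), and the bug gets fixed.
                  throw new IllegalStateException("negative itemCount: " + itemCount);
              }
              return itemCount > 10 ? 0.10 : 0.0;
          }
      }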

    Read the article

  • Graphic Card Installation Failure (NV 9600M GT)

    - by Georg
    I wanted to switch from 7 to Ubuntu. Now, the first thing that holds me back is the installation of the driver for my Nvidia 9600M GT. Ubuntu recommends 4 drivers in the "Additional Drivers" section. Now, whatever driver I install, one of these four things happens RANDOMLY after rebooting:
    1. The PC freezes before showing the bootup splash.
    2. The PC freezes while showing the bootup splash image (the Ubuntu logo with the 5 dots).
    3. The PC gets to the login screen and freezes instantly (though the Unity bar is transparent now).
    4. The PC freezes right after I am able to log in. (While this is happening the Unity bar changes from transparent to an ugly flat style.)
    I also tried to install the official Nvidia .run file, but after exiting the X server it says the driver is not compatible with the kernel and exits the installation. Can somebody help me? I really want to get rid of Windows :( But Ubuntu drives me perfectly insane. Please answer as noobish as possible. I'm an expert Windows user, but Linux is absolutely new to me. My specs: Ubuntu 11.10 installed from a USB stick (like 100 times now :/ because I don't know how to remove the driver in recovery mode), 9600M GT, 4GB RAM, Intel Centrino 2 dual core 2.26GHz, Win 7 and Ubuntu running alongside each other. Thank you in advance, regards.

    Read the article

  • Convenience of mySQL over xml

    - by Bonechilla
    Currently I use XML to store the specific information needed to correctly load a few things, such as a list of specified characters, scenes and music. Furthermore, I use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called on to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file (extensions omitted), zipped in a folder that gets encrypted. At first, using XML was working fine; however, as the skill list grows I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything from XML to JSON, but I think MySQL might be a better move. Can anyone inform me of the key differences, and the pros and cons of each? I guess I'm looking for the best way to store the data more conveniently, one that would be easier to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concatenate into a single field and parse later.
    Edit: If I am going in the right direction with XML, would it make sense to convert it to JSON and use maybe Kryo or EclipseLink JAXB (MOXy)?
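    For comparison, a minimal sketch of the two storage routes being weighed (my code; the class, table and connection details are made up): JAXB marshals a skill to an XML file much as the poster does today, while the MySQL route makes each skill a row that can be queried instead of being tracked in a hand-kept grand list.

      import java.io.File;
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import javax.xml.bind.JAXBContext;
      import javax.xml.bind.annotation.XmlRootElement;

      public class SkillStore {
          @XmlRootElement
          public static class Skill {
              public String name = "Fireball";
              public int power = 42;
          }

          public static void main(String[] args) throws Exception {
              Skill skill = new Skill();

              // XML route: one file per skill, as today.
              JAXBContext.newInstance(Skill.class)
                         .createMarshaller()
                         .marshal(skill, new File("fireball.xml"));

              // MySQL route: one row per skill; SELECTs replace the grand list.
              // (Requires a MySQL JDBC driver on the classpath.)
              try (Connection c = DriverManager.getConnection(
                       "jdbc:mysql://localhost/game", "user", "pass");
                   PreparedStatement ps = c.prepareStatement(
                       "INSERT INTO skills (name, power) VALUES (?, ?)")) {
                  ps.setString(1, skill.name);
                  ps.setInt(2, skill.power);
                  ps.executeUpdate();
              }
          }
      }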

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We'll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them and how to manage their lifetime.
    Installing the driver
    Info on how to install the driver can be found in an earlier blog post here.
    Establishing connections
    As you click on the "Add Connection" link in the left pane you will notice that now it's possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it's possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don't want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there's any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached etc.).
    The context for remote servers
    Let's take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you're writing. If you're connecting to a live server the context will contain the following: the application object itself, and all entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later.
    Remoting is not supported
    As you play with the entities exposed by the context you will notice that you can't read and write directly to/from them. If, for instance, you're trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process.
    ObservableSource.Dump();
    will yield the following error:
    Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.
    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can't bring the results to the LinqPad window unless you write code specifically for that.
    Compose queries
    You may ask – what's the purpose of all that? After all, the same information is present in the EventFlowDebugger, so why bother with showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let's say that this code creates an entity:
    using (var server = Server.Connect(...))
    {
        var a = server.CreateApplication("WhiteFish");
        var src = a
            .DefineObservable<int>(() => Observable.Range(0, 3))
            .Deploy("ObservableSource");
    If later we want to compose with the source we have to fetch it, and then we can bind something to
    a.GetObservable<int>("ObservableSource").Bind(...
    This means that we had to know a bunch of things about it: that it's a source, that it's an observable, that it produces a result with payload Int32, and that it's named "ObservableSource". Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it's much easier to make sense of what's available. Let's look at a scenario where composition is plausible. With the new programming model it's possible to create "cold" sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it's much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type of those events is int. In the particular case of my example I had the source produce a range of integers, and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.
    Proxy Types
    Here's an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type:
    Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");
    Now if we refresh the connection we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type.
    var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
    var filter = from i in O1
                 where i > threshold
                 select i;
    filter.Deploy("O2");
    You will notice that the anonymous type defined with the statement new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it's not possible then a new type will be generated that approximates the type that exists on the server.
    Control metadata
    In addition to composing processes on top of the existing entities, we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There's another way to find out what's behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.
    Runtime information
    So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).
    Regards,
    The StreamInsight Team

    Read the article

  • links for 2011-02-01

    - by Bob Rhubart
    OTN Virtual Developer Day for WebLogic Server and WebLogic Developer Broadcasts (WebLogic Server) Mike Lehmann with details on a whole bunch of upcoming online events for those with an interest in WebLogic. (tags: WebLogic oracle otn) IOUC Summit: Open Arms and Cheese Shoes (Oracle Technology Network Blog (aka TechBlog)) Event highlights from OTN head honcho Justin Kestelyn. (tags: oracle otn IOUC) Prognostications for the Future of BI (BI & Analytics Pulse) Jacqueline Coolidge looks into the Business Intelligence crystal ball. (tags: oracle otn businessintelligence) Edwin Biemond: Some handy code for your managed Beans (ADF & JSF) "Back in 2009, I already made a blog post about some handy code which you can use in your ADF Web Application. You can say this blog post is part 2, and here I will show you the code I use most in my own managed Beans." - Oracle ACE Edwin Biemond (tags: java SOA oracle oracleace) Leon Smiers: Process, content and collaboration "Taking a look at today's business, most companies still have a lot [to do] as far as adapting to and leveraging Web 2.0 possibilities is concerned." - Leon Smiers (tags: e20 oracle enterprise2.0) Antony Reynolds: Using the SOA-BPM VirtualBox Appliance Antony says: "Recently I have been setting up some machines for fellow engineers. My base setup consists of Oracle Enterprise Linux with Oracle Virtual Box." (tags: oracle otn soa virtualization virtualbox bpm) Oracle Weblogic Server Gets Smart with CERN | SiliconANGLE CERN, the home of European particle physics, chose Oracle WebLogic Server to handle technical applications and copious HR and administrative Java-based web applications used by CERN employees. Oracle got its start by scheduling the interventions of the Large Hadron Collider (LHC). (tags: Weblogic oracle CERN) Oracle Virtual Developer Day: WebLogic - February 10, 2011. Speakers: Frances Zhao - Principal Prod Mngr, Java Platform Group; Will Lyons - Dir, WebLogic Server Prod Mgmt; Steven Button - Principal Prod Mngr, WebLogic Server; Pyounguk Cho - Principal Prod Mngr, Java Platform Group. (tags: oracle otn weblogic java)

    Read the article
