Search Results

Search found 7821 results on 313 pages for 'high dpi'.


  • What can I do about rsync of large files killing my laptop's wifi connection

    - by David Dean
    When I run an rsync to back up my home folder over the network, like so: rsync -avhz --progress --delete /home/dbdean/ [email protected]:/home/backups/david/ I seem to have problems with my quite large .VirtualBox/HardDisks/Windows XP.vdi file. Occasionally the wifi will silently fail (the transfer stops, and any other network access is broken). If I reconnect the wifi to my network before the transfer times out, it happily keeps going (and other network access is back), but I can't just leave it unattended most of the time, as I have to keep an eye on it. I'm guessing this is probably a bug in the wireless card related to a particularly high sustained volume of network usage, but I'm not really sure where to start with diagnosing this problem so that I can provide a good bug report. Or it could be something else, I guess. Any help would be appreciated. My network card is an Atheros Communications Inc. AR9285, as lspci -k shows:
        43:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
            Subsystem: Hewlett-Packard Company Device 3040
            Kernel driver in use: ath9k
            Kernel modules: ath9k

    Read the article

  • EVENT RECAP: Oracle Health Sciences Conference

    - by cwarticki
    Monaco served as an intense location for this year's Oracle Health Sciences User Group conference. It was a "Grand Prix" event with nearly 200 attendees from all over the world. In a country famous for high-performance race cars, luxury super yachts and lifestyles of the rich & famous, the conference was very Ellison-esque. I think the superyachts were being paired with Exadata. The OSHUG staff were fantastic. Robin and Taylor (pictured left) from Drohan Management took care of all the details and were wonderful to get to know. I met with some real Oracle loyalists, including Stan Sachar, I.T. Manager for Westat and Focus Group co-chair for Admin Configuration Mgmt (ACM). Westat was an early adopter of Oracle Clinical for clinical trial projects, with installations in 1997-98. I had a chance to talk with Stan during the reception; he is an Oracle advocate and evangelist who has invested his career in using Oracle products. (Stan Sachar pictured right with Dick Wolnick from Oracle, on left) I also met with Mirco Becker from Grunenthal GmbH. He's been working with the Argus product for over 6 years and is a big user of Oracle Support. Mirco attended my support best practices session, was actively engaged and asked several questions. He's excited to adopt those best practices and work more efficiently and effectively with Support. Finally, I thank the many who attended my session. I admit, the beautiful weather and view of the ocean were a distraction, but nonetheless my mission was to provide you with all the necessary support resources for Health Sciences users. You will find a copy of my presentation on the OSHUG website. Bon voyage, Monaco. Thanks for the memories. I'll see everyone next year, in Miami. - Chris Warticki, Global Customer Management

    Read the article

  • Data structure for pattern matching.

    - by alvonellos
    Let's say you have an input file with many entries like these: date, ticker, open, high, low, close, <and some other values>. And you want to execute a pattern-matching routine on the entries (rows) in that file, using a candlestick pattern, for example (see Doji). And that pattern can appear on any uniform time interval (let t = 1s, 5s, 10s, 1d, 7d, 2w, 2y, and so on...). Say a pattern-matching routine can take an arbitrary number of rows to perform an analysis and contain an arbitrary number of subpatterns. In other words, some patterns may require 4 entries to operate on. Say also that the routine (may) later have to find and classify extrema (local and global maxima and minima as well as inflection points) for the ticker over a closed interval; for example, you could say that a cubic function (x^3) has its extrema on the interval [-1, 1]. (See link) What would be the most natural choice in terms of a data structure? What about an interface that conforms a Ticker object containing one row of data to a collection of Ticker so that an arbitrary pattern can be applied to the data? What's the first thing that comes to mind? I chose a circular doubly-linked list that has the following methods: push_front(), push_back(), pop_front(), pop_back(), and an overloaded [] that can be used with negative indices. But that data structure seems very clumsy: with so much pushing and popping going on, I have to make a deep copy of the structure before running an analysis on it. So, I don't know if I made my question very clear -- but the main points are: What kind of data structures should be considered when analyzing sequential data points to conform to a pattern that does NOT require random access? What kind of data structures should be considered when classifying extrema of a set of data points?
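    A minimal sketch of one structure that fits the "sequential, no random access" case: a fixed-size sliding window over the rows built on std::deque. This is not from the original question; the Ticker fields follow the row format above, and the Doji-style predicate and 4-row window size are illustrative assumptions.

        // Sketch only: a sliding window over sequential rows, avoiding the deep
        // copies described above. The Doji test is a toy placeholder, not a real rule.
        #include <cmath>
        #include <cstddef>
        #include <deque>
        #include <vector>

        struct Ticker {                       // one row from the input file
            double open, high, low, close;
        };

        bool looksLikeDoji(const Ticker& t)   // toy predicate: open roughly equals close
        {
            return std::fabs(t.close - t.open) <= 0.001 * t.open;
        }

        template <typename Pattern>
        std::vector<std::size_t> scan(const std::vector<Ticker>& rows,
                                      std::size_t windowSize, Pattern matches)
        {
            std::vector<std::size_t> hits;
            std::deque<Ticker> window;        // the last N rows, oldest at the front
            for (std::size_t i = 0; i < rows.size(); ++i) {
                window.push_back(rows[i]);
                if (window.size() > windowSize) window.pop_front();
                if (window.size() == windowSize && matches(window))
                    hits.push_back(i);        // pattern ends at row i
            }
            return hits;
        }

        // e.g. scan(rows, 4, [](const std::deque<Ticker>& w) { return looksLikeDoji(w.back()); });

    For the extrema pass, a plain contiguous container (std::vector) with index arithmetic works just as well, since classifying a local maximum, minimum or inflection point only needs each point and its neighbours.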

    Read the article

  • State of the art Culling and Batching techniques in rendering

    - by Kristian Skarseth
    I'm currently working on upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As is the case with any building, there are a lot of objects occluded by walls, and you naturally only see the objects that are in the same room as you, or the exterior if you are on the outside. This leaves a large number of objects that should be culled away through occlusion culling and frustum culling. At the same time there is a lot of repetitive geometry that can be batched into render batches, and also a lot of objects that can be rendered with instanced rendering. The way I see it, it can be difficult to combine render batching and culling in an optimal fashion. If you batch too many objects into the same VBO it's difficult to cull the objects on the CPU in order to skip rendering that batch. At the same time, if you skip the culling on the CPU, a lot of objects will be processed by the GPU while they are not visible. If you skip batching completely in order to more easily cull on the CPU, there will be an unwantedly high number of render calls. I have done some research into existing techniques and theories as to how these problems are solved in modern graphics, but I have not been able to find any concrete solution. An idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room or within a radius of n meters. This could be simplified and optimized through the use of octrees. Does anyone have any pointers to techniques used for scene management, culling, batching etc. in state-of-the-art modern graphics engines?
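    As a rough sketch of the "batch only objects that are close to each other" idea (not from the question; a uniform grid stands in for the octree, and all types are placeholders), each bucket below would become one render batch, one VBO, whose bounding volume can then be frustum- or occlusion-tested as a unit:

        // Sketch only: group scene objects into grid cells keyed by quantised
        // position; each bucket is a candidate render batch that can be culled
        // as a whole before issuing its draw call.
        #include <cmath>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct SceneObject { Vec3 position; /* mesh handle, material, ... */ };

        std::uint64_t cellKey(const Vec3& p, float cellSize)
        {
            // Pack quantised x/y/z into one key; assumes cell indices fit in 21 bits.
            auto q = [cellSize](float v) {
                return static_cast<std::uint64_t>(
                    static_cast<std::int64_t>(std::floor(v / cellSize)) & 0x1FFFFF);
            };
            return (q(p.x) << 42) | (q(p.y) << 21) | q(p.z);
        }

        std::unordered_map<std::uint64_t, std::vector<const SceneObject*>>
        buildBatches(const std::vector<SceneObject>& objects, float cellSize)
        {
            std::unordered_map<std::uint64_t, std::vector<const SceneObject*>> batches;
            for (const SceneObject& obj : objects)
                batches[cellKey(obj.position, cellSize)].push_back(&obj);
            return batches;   // upload each bucket to its own VBO, cull per bucket
        }

    The cell size trades batch size against culling granularity, which is essentially the same knob the radius-of-n-meters idea exposes.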

    Read the article

  • Oracle's SPARC T4, 007 Style

    - by Kristin Rose
    The name's 4, T4, and this powerhouse travels hand in hand with its good friend SPARC. About 6 years ago, on-chip encryption acceleration first shipped in a commercial system, the SPARC T1. Today, thanks to Oracle's innovative SPARC leadership in on-chip encryption acceleration, complex cryptographic computation was born and has since rapidly evolved. Customers can now have security with performance, because we, my friend, are in the Age of Big Data. If you need some high-speed action in your life, listen here. Through their cross-sell opportunity, the SPARC T4 systems offer customers much more value for applications than just increased performance. This is done by enabling partners to integrate their own applications with Oracle's SPARC T4 servers for cloud deployments, and by providing direct business benefits, such as security, performance and optimization, that supersede the commodity approach to data center computing. As companies continue down this complex path of big data, eCommerce, and mobility, the need to provide better and more in-depth security is more prominent than ever. Oracle's SPARC T4 processor allows customers to deliver the highest levels of application security, as well as the necessary level of performance, without added cost and complexity. To learn more about the value of SPARC T4, check out a more in-depth blog here. For more on the SPARC T4 family of products, click here. Encryption Lives Another Day, The OPN Communications Team

    Read the article

  • Resolution independence - resize on the fly or ship all sizes?

    - by RecursiveCall
    My game relies heavily on textures of various sizes, some of them full-screen. The game is targeted at multiple resolutions. I found that resizing textures (downsizing) works quite well for this game's art type (it's not pixel art or anything like that). I asked my artist to ensure that all textures at the edges of the screen are created in such a way that they can safely "overflow" off screen; this means that aspect ratio is not an issue. So with no aspect ratio issues, I figured that I would simply ask my artist to create assets in very high resolution, and then resize them down to the appropriate screen resolution. The question is, when and how do I do that? Do I pre-resize everything to common resolutions in Photoshop and package all assets in the final product (increasing the download size the user has to deal with), then select the appropriate asset based on the detected resolution? Or do I ship with the largest set of textures, detect the resolution on load, set a render target and draw all downsized assets to it and use that? Or, for the latter, do I use some sort of CPU-side algorithm to resize on game load?
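    For the last option, a CPU-side resize at load time can be as simple as a bilinear downscale over the raw pixel data. The sketch below is illustrative only and not from the original question (RGBA8 buffers assumed, no gamma handling, no SIMD); in practice a GPU render-target pass or an image library would usually be faster, and a box/mip filter is kinder to large downscale factors:

        // Sketch only: naive bilinear downscale of a row-major RGBA8 buffer.
        #include <cstdint>
        #include <vector>

        std::vector<std::uint8_t> downscaleRGBA(const std::vector<std::uint8_t>& src,
                                                int srcW, int srcH, int dstW, int dstH)
        {
            std::vector<std::uint8_t> dst(static_cast<std::size_t>(dstW) * dstH * 4);
            for (int y = 0; y < dstH; ++y) {
                float fy = (y + 0.5f) * srcH / dstH - 0.5f;
                int y0 = fy < 0 ? 0 : static_cast<int>(fy);
                int y1 = y0 + 1 < srcH ? y0 + 1 : srcH - 1;
                float wy = fy - y0;
                for (int x = 0; x < dstW; ++x) {
                    float fx = (x + 0.5f) * srcW / dstW - 0.5f;
                    int x0 = fx < 0 ? 0 : static_cast<int>(fx);
                    int x1 = x0 + 1 < srcW ? x0 + 1 : srcW - 1;
                    float wx = fx - x0;
                    for (int c = 0; c < 4; ++c) {     // blend the four neighbouring texels
                        float top = src[(y0 * srcW + x0) * 4 + c] * (1 - wx) +
                                    src[(y0 * srcW + x1) * 4 + c] * wx;
                        float bot = src[(y1 * srcW + x0) * 4 + c] * (1 - wx) +
                                    src[(y1 * srcW + x1) * 4 + c] * wx;
                        dst[(static_cast<std::size_t>(y) * dstW + x) * 4 + c] =
                            static_cast<std::uint8_t>(top * (1 - wy) + bot * wy + 0.5f);
                    }
                }
            }
            return dst;
        }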

    Read the article

  • links for 2010-06-01

    - by Bob Rhubart
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 -- Do we need measures in a Fact Table? Troubleshooting from Rittman Mead's Venkatakrishnan J. (tags: oracle otn businessintelligence datawarehouse)
    Grid container support : JavaFX Composer. An overview how JavaFX Composer supports the grid container. (tags: oracle sun javafx)
    John Brunswick: Site Studio Mobile Example - WCM Reuse. The example highlighted in John Brunswick's post takes advantage of dynamic conversion capabilities in Oracle UCM that allow site content to be created and updated via MS Office documents. (tags: oracle otn enterprise2.0)
    @glassfish: GlassFish 3 in the EC2 Cloud powering Dutch and Belgian community polls. "The infrastructure is Amazon's Elastic Cloud Computing (EC2) environment because of the dynamic provisioning (elasticity) required by such an online service. Requests are handled directly by the grizzly layer of GlassFish with no extra front-end HTTP layer and shows great performance and scalability." -- The Aquarium (tags: oracle java sun glassfish cloud)
    James Morle: Flash Storage Will Be Cheap: The End of the World is Nigh. "We now need technologies that look more like Oracle Exadata v2, with low-latency RDMA interfaces directly into the Operating System/Database. However, they need to easily and natively support other types of storage (unstructured data such as files, VMware datastores and so forth). The Exadata architecture lends itself well to changes in this area in both hardware trends and access protocols." -- James Morle (tags: oracle otn exadata database architecture virtualization)
    Java / Oracle SOA blog: HTTP binding in Soa Suite 11g PS2 (tags: ping.fm)
    Confessions of a Software Developer: Some Tips for Installing Oracle BPM 11g on Windows XP (tags: ping.fm)
    SOA and Java using Oracle technology: Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid (tags: ping.fm)

    Read the article

  • Career paths after web development?

    - by Mike
    I know this is open ended, but I'm just curious what you've done after your web development career, or if you've stayed loyal. I have a feeling (and have read and heard) that web development salaries top out at a certain amount, even after 10-15 years of experience. The reason I ask is that I graduated last summer with a BS in Chemical Engineering but have not been able to find a job in California. I've been web designing/developing since high school and thought that I should start a career, even if it's not related to my major, rather than lose more time. Even though I'd really like to have an engineering career, I don't think that will happen. Do you guys have any suggestions or experiences for choices after, or ways to enhance, a career after several years in web development? Thanks! Update: Thanks for the responses guys! One more question: Is it likely I'd be accepted into an MS/PhD program if I've been out of uni for a couple of years, or with semi-related job experience? Would I be a bit of a misfit with a BS in ChemE studying CS/CompE for an MS?

    Read the article

  • Optimizing MySQL -

    - by Josh
    I've been researching how to optimize MySQL a bit, but I still have a few questions. MySQL Primer Results http://pastie.org/private/lzjukl8wacxfjbjhge6vw Based on this, the first problem seems to be that the max_connections limit is too low. I had a similar problem with Apache initially, the max connection limit was set to 100, and the web server would frequently lock up and take an excruciatingly long time to deliver pages. Raising the connection limit to 512 fixed this issue, and I read that raising the connection limit on MySQL to match this was considered good practice. Being that MySQL has actually been "locking up" recently as well (connections have been refused entirely for a few minutes at a time at random intervals) I'm assuming this is the main cause of the issue. However, as far as table cache goes, I'm not sure what I should set this as. I've read that setting this too high can hinder performance further, so should I raise this to right around 551, 560, 600, or do something else? Lastly, as far as raising the join_buffer_size value goes, this doesn't even seem to be included in Debian's my.cnf file by default. Assuming there's not much I can do about adding indexes, should I look into raising this? Any suggested values? Any suggestions in general here would be appreciated as well. Edit: Here's the number of open tables the MySQL server is reporting. I believe this value is related to my question (Opened_tables: 22574)

    Read the article

  • links for 2010-05-11

    - by Bob Rhubart
    Fat Bloke: Oracle VM VirtualBox 3.1.8 released! "Supporting new platforms such as Ubuntu 10.04 (Lucid Lynx) and delivering a host of bugfixes, VirtualBox 3.1.8 is available now from the usual places," says the Fat Bloke. (tags: oracle otn virtualization linux)
    Anthony Shorten: What is the Oracle Utilities Application Framework? "The Oracle Utilities Application Framework is a reusable, scalable and flexible java based framework which allows other products to be built, configured and implemented in a standard way," according to Anthony Shorten. (tags: oracle otn framework java standards)
    Audio podcast: Oracle WebLogic Suite Virtualization Option (Application Grid). "Steve Harris, Senior Vice President of application server and Java Platform, Enterprise Edition development, talks about running Oracle WebLogic Server on Oracle JRockit Virtual Edition. Listen here to learn how you can run faster and more efficiently without a guest operating system on Oracle VM." (tags: oracle otn grid wweblogic podcast virtualization)
    MySQL Community Blog: MySQL track with free event at Kaleidoscope 2010. "The even greater news," writes Giuseppe Maxia, "is that, in addition to the general schedule, there are SUNDOWN SESSIONS!" (tags: java sun oracle mysql)
    @SOAtoday: Will Cloudsourcing Change the Face of Consulting? "Will we all be working remotely to deliver our client projects going forward? Maybe someday, but not anytime soon." -- Oracle ACE Director Jordan Braunstein (tags: oracle otn oracleace cloudcomputing entarch)
    @SOAtoday: Are we Paid to Say No? "Software architects take their governance initiatives seriously, and I can say with a high level of confidence that most of these denials are highly justified. But, have we architects lost our entrepreneurial spirit, with governance as our defense? Are we over-scrutinizing new ideas and slowing down pilots of innovation because they don't align with our governance policies and enterprise frameworks?" -- Oracle ACE Director Jordan Braunstein (tags: architect entarch oracle otn soa)

    Read the article

  • Automated Error Reporting in .NET Reflector - harnessing the most powerful test rig in existence

    - by Alex.Davies
    I know a testing system that will find more bugs than all the unit testing, integration testing, and QA you could possibly do. And the chances are you're not using it. It's called your users. It's a cliché that you should test so that you, rather than your users, find your bugs. Of course you should. But it's also a cliché that no software is ever shipped bug-free. Lost cause? No, opportunity! I think .NET Reflector 6 is pretty stable. In fact I know exactly how stable it is, because some (surprisingly high) proportion of its users tell me every time it crashes. If they press "Send Error Report", I get a report with the full stack trace and local variables, and then I fix it. As a rough guess, while a standard stack trace is enough to fix a problem 30% of the time, having all those local variables in the stack trace means I can fix it about 80% of the time. How does this all happen? Did it take ages to code this swish system? Nope, it was one checkbox in SmartAssembly. It adds some clever code to your assembly to capture local variables every time an exception is thrown, and to ask your user to report it to you, with a variety of other useful information. Of course not all bugs show up as exceptions. But if you get used to knowing that SmartAssembly will tell you when an exception happens, you begin to change your coding style. Now, as long as an exception gets thrown in any situation you don't expect, you'll fix it if it ever happens. You'll start throwing exceptions liberally, and stop having to think about whether tiny edge cases are possible, as long as they throw an exception if they happen.

    Read the article

  • Partner Webcast - Oracle Reports to BI Publisher migration

    - by dmitry.nefedkin(at)oracle.com
    Monday, March 21, 2011, 9 a.m. CET (10 a.m. EET)
    Description: Oracle Reports, a component of Oracle Fusion Middleware, is Oracle's classic, high-fidelity enterprise reporting tool. Oracle remains committed to the development of this technology and to its ongoing release as a component of the Oracle Fusion Middleware platform, but also enables conversion of Oracle Reports to Oracle BI Publisher. Oracle Business Intelligence Publisher (BI Publisher) is Oracle's enterprise reporting server for authoring, managing, and delivering all types of highly formatted documents. Extremely efficient and highly scalable, BI Publisher can generate tens of thousands of documents per hour with minimal impact on transactional systems. After a quick introduction to BI Publisher we will look at the process of Oracle Reports to BI Publisher conversion.
    Agenda:
    Oracle Reports strategy & support policy
    Reporting challenges
    Benefits of BI Publisher
    Oracle Reports -> BI Publisher Conversion Utility
    Demo
    Upgrade BI Publisher to 11g
    Q&A
    Delivery Format: This free online live Internet seminar will be delivered over the Web and conference call. Duration: 1 hour. To register, click HERE. For any questions please contact [email protected].

    Read the article

  • How to check system performance?

    - by Woltan
    Hi all, I am a new Ubuntu user and really like the look and the features of the OS. However, I have a feeling that the performance could be better. By that I mean: Somehow the scrolling of sites within Firefox seems laggy. I do not know how I should measure it, but there is a difference. Not that it is unusable, but it is aggravating. Java programs are running really slowly. As a comparison (I know it is not a fair one), I tried to run a game using Wine. The graphics settings under Windows were much higher (1600x1200) with a high level of detail, while in Ubuntu, with the lowest level of detail, 1024x768 was the maximum. (My graphics card is a GeForce GTS 450 with 1 GB of RAM.) Coming to my question: Is there a way to measure the performance of 3D acceleration, Java applets, Firefox scrolling etc. with a tool and compare it with, let's say, a Windows OS or other users having almost the same hardware? Maybe it is a setup issue where some fundamental drivers are missing or something!? Any help, link or suggestion is appreciated! Cheerio, Woltan

    Read the article

  • Craftsmanship Tour Day 1: Didit Long Island

    - by Liam McLennan
    On Monday I was at Didit for my first ever craftsmanship visit. Didit seems to occupy a good part of a nondescript building in Rockville Centre, Long Island. Since I had arrived early from Seattle I had some time to kill, so I stopped at the Rockville Diner on the corner of N Park Ave and Sunrise Hwy. I thoroughly enjoyed the pancakes and the friendly service. After walking to the Didit office I met Rik Dryfoos, the Didit Engineering Manager who organised my visit, and got an introduction to Didit and the work they are doing. I spent the morning in the room shared by the Didit developers, who are working on some fascinating deep engineering problems. After lunch at a local Thai place I set up a webcam to record an interview with Rik and Matt Roman (Didit VP of Engineering). I had a lot of trouble with the webcam, including losing several minutes of conversation, but in the end I was very happy with the result. Here are the full interviews with Rik and Matt: Interview with Rik Dryfoos; Interview with Matt Roman. We had a great chat, much of which is captured in the recording. It was such a great conversation that I almost missed my train to Manhattan. I'm sure Didit will continue to do well with such a dedicated and enthusiastic team. I sincerely thank them for hosting me for the day. If you are looking for a true agile environment and the opportunity to work with a high-quality team, then you should talk to Didit.

    Read the article

  • Warnings When Undo Isn't Possible

    - by ultan o'broin
    Enjoyed this post Never Use a Warning When you Mean Undo by Aza Raskin. It makes sense never to warn users if an undo option is possible. The examples given are from the web space. Here's the conclusion: Warnings cause us to lose our work, to mistrust our computers, and to blame ourselves. A simple but foolproof design methodology solves the problem: "Never use a warning when you mean undo." And when a user is deleting their work, you always mean undo. However, in enterprise apps you may find that an undo option isn't technically possible or desirable. Objects may be shared, part of a flow elsewhere, or undoing something committed to the database (a rollback I guess) may not be feasible if it becomes locked by another process. Plus, what constitutes user ownership of objects isn't always clear to users. The implications of delete (and other) actions need to be clearly communicated out in advance. Really, warnings are important in the enterprise space. Data has a very high value, and users can perform a wide variety of actions that may risk that data, not always within the application itself (at browser level, for example). That said, throwing warnings all over the place when an undo option is possible is annoying. Instead, treat warnings with respect. When there is no undo option possible, use warning messages to communicate potentially dangerous or irrecoverable actions or the downstream consequences of user actions on the process or task flow. Force the user to respond to a warning message by using a modal dialog with clearly labeled action buttons. Here's a couple of examples. A great article that got me thinking. Let's see more like that. Let's not forget there's more types of messages than just error messages. User assistance and user experience professionals need to understand when best to use confirmation, information, and warning types too!

    Read the article

  • IASA ITARC &ndash; Denver May 6th

    - by Jeff Certain
    The Denver chapter of the International Association of Software Architects (IASA) is holding an IT Architect Regional Conference (ITARC) in Denver on May 6th. The speaker list for this conference is amazing. Paul Rayner, Dave McComb, Randy Kahle, Peter Provost, Randy Stafford, George Fairbanks – all great speakers, and from Colorado. Brandon Satrom (who also happens to be the president of the IASA Austin chapter) will also be speaking, as will some other heavy hitters (for example, Ted Farrell, Chief Architect and Senior VP of Oracle). This is an amazing line-up, and the conference is quite reasonably priced ($150 for IASA members until April 10th, including a catered lunch). I also have the privilege of being a presenter at this conference. If you’ve ever heard any of the previously named speakers, you know that they set the bar quite high. Sounds like I’m going to have to step up my game. What I get to talk about is really cool stuff. The company I work for – Colorado CustomWare – brought me on board nearly two years ago. To say there was some technical debt is somewhat… understated. Equally understated would be that management is committed to doing the right thing. Over the past two years, we’ve done significant architectural refactoring – including an effort that took the entire team offline for most of a month. We’ve reduced the application size by 50% without losing functionality. As you can imagine, this has reduced the complexity of the application, making development faster and less prone to bugs. We’ve made many other changes – moving to an agile process, training developers, moving towards a more OO architecture. The changes we’ve made reveal, in some ways, just how far afield we were.. and there are still more changes to be made. Amazingly enough, our leadership team is eager for me to share these experiences with other architects. I’m really looking forward to being able to do so.

    Read the article

  • VGA monitor won't detect on NVIDIA Quadro NVS 4200M

    - by tanmayk
    I am on a Dell Latitude E6420 and just installed 12.04. When I plug in my external monitor's VGA cable, the laptop doesn't detect any new monitor. Here's the output of my lspci:
        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:16.3 Serial controller: Intel Corporation 6 Series/C200 Series Chipset Family KT Controller (rev 04)
        00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4)
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4)
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b4)
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b4)
        00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b4)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation QM67 Express Chipset Family LPC Controller (rev 04)
        00:1f.2 RAID bus controller: Intel Corporation 82801 Mobile SATA Controller [RAID mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [Quadro NVS 4200M] (rev a1)
        01:00.1 Audio device: NVIDIA Corporation HDMI Audio stub (rev a1)
        03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 (rev 34)
        0b:00.0 SD Host controller: O2 Micro, Inc. Device 8221 (rev 05)
        0b:00.1 Mass storage controller: O2 Micro, Inc. Device 8231 (rev 03)
    I tried installing the Quadro NVS 4200M driver from the NVIDIA website but got stuck in a TTY after the installation finished. Any ideas as to how I can get the external monitor to work? Thanks!

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second one in his article. First, through my searching experience, there are a couple of people that probably have the knowledge to help out on this but don't know what GLUT is, so I'm going to try and explain (feel free to correct me) the relevant functions of this OpenGL toolkit for my problem. Skip this section if you know what GLUT is and how to play with it.

    GLUT Toolkit: GLUT is an OpenGL toolkit and helps with common tasks in OpenGL. glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration. glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. processAnimationTimer() will not be called every TIMER_MILLISECONDS, but just once. The glutPostRedisplay() function requests GLUT to render a new frame, so we need to call this every time we change something in the scene. glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load. The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or since the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's keep with this one for now. I think this is enough information on how GLUT renders frames, so people that didn't know about it can also pitch in on this question and try to help if they feel like it.

    Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this:

        #define TICKS_PER_SECOND 30
        #define MOVEMENT_SPEED 2.0f

        const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void)
        {
            (...)
            // Setup the camera position and looking point
            SceneCamera.LookAt();
            // Do all drawing below...
            (...)
        }

        void processAnimationTimer(int value)
        {
            // setups the timer to be called again
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Get the time when the previous frame was rendered
            previousTime = currentTime;
            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;
            /* Multiply the camera direction vector by constant speed then by the
               elapsed time (in seconds) and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
            // Requests to render a new frame (this will call my renderScene() once)
            glutPostRedisplay();
        }

        void main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)
            // Setup the timer to be called one first time
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Read the current time since glutInit was called
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
        }

    This implementation doesn't feel right. It works in the sense that it keeps the game speed constant regardless of the FPS, so that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware, it's wrong. It's my understanding, though, that I still need to calculate the elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time. I'm not sure how I can fix this and, to be completely honest, I have no idea what the game loop in GLUT is, you know, the while( game_is_running ) loop in Koen's article. But it's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) it was basically doing nothing, only a black screen; the CPU spiked to 25% and remained there until I killed the game, and then it went back to normal. So I don't think that's the path to follow. Using glutTimerFunc() is definitely not a good approach to base all movements/animations on, as I'm limiting my game to a constant FPS, not cool. Or maybe I'm using it wrong and my implementation is not right?

    How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT? I originally posted this question on Stack Overflow before being pointed to this site. The following is a different approach I tried after creating the question on SO, so I'm posting it here too.

    Another Approach: I've been experimenting, and here's what I was able to achieve now. Instead of calculating the elapsed time in a timed function (which limits my game's framerate) I'm now doing it in renderScene(). Whenever changes to the scene happen I call glutPostRedisplay() (i.e. camera moving, some object animating, etc...), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance. My code has now turned into this:

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void)
        {
            (...)
            // Get the time when the previous frame was rendered
            previousTime = currentTime;
            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;
            /* Multiply the camera direction vector by constant speed then by the
               elapsed time (in seconds) and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
            // Setup the camera position and looking point
            SceneCamera.LookAt();
            // All drawing code goes inside this function
            drawCompleteScene();
            glutSwapBuffers();
            /* Redraw the frame ONLY if the user is moving the camera
               (similar code will be needed to redraw the frame for other events) */
            if(!IsTupleEmpty(cameraDirection)) {
                glutPostRedisplay();
            }
        }

        void main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
        }

    Conclusion: it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that? Please note that I'm just doing this for fun; I have no intention of creating some game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something else besides GLUT. But since I'm using GLUT, and other than the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
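    For reference, a rough sketch of how Koen's fourth scheme (fixed-rate game updates, rendering as fast as the hardware allows) might map onto GLUT's callbacks, accepting the idle-callback CPU cost mentioned above. This is not from the original question; updateGame() and drawCompleteScene() are hypothetical placeholders for the real simulation and drawing code, and the tick constants are only examples:

        #include <GL/glut.h>

        const int TICKS_PER_SECOND = 25;          // fixed update rate
        const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
        const int MAX_FRAMESKIP = 5;              // updates allowed per rendered frame

        int nextGameTick = 0;

        void updateGame(void)        { /* advance the simulation by one fixed tick */ }
        void drawCompleteScene(void) { /* all drawing code goes here */ }

        void renderScene(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            drawCompleteScene();
            glutSwapBuffers();
        }

        void gameLoop(void)                       // registered as the idle callback
        {
            int loops = 0;
            // Catch the simulation up in fixed-size steps, but cap the number of
            // steps so a slow machine still gets to draw a frame now and then.
            while (glutGet(GLUT_ELAPSED_TIME) > nextGameTick && loops < MAX_FRAMESKIP) {
                updateGame();
                nextGameTick += SKIP_TICKS;
                ++loops;
            }
            glutPostRedisplay();                  // render as often as possible
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutCreateWindow("fixed-timestep sketch");
            glutDisplayFunc(renderScene);
            glutIdleFunc(gameLoop);
            nextGameTick = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
            return 0;
        }

    The busy idle callback is exactly what produced the 25% CPU spike described above; it trades CPU time for maximum framerate, which is why some people add a short sleep or vsync there.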

    Read the article

  • Design for XML mapping scenarios between two different systems [on hold]

    - by deepak_prn
    Mapping XML fields between two systems is a mundane routine in integration scenarios. I am trying to make the design documents look better and provide a clear understanding to the developers, especially when we do not use XSLT or any IDE such as JDeveloper or Eclipse plugins. I want it to be a high-level design but at the same time talk in the developers' language, so that no requirements slip through the cracks. For example, one of the scenarios goes: the store cashier sells an item, and the transaction data is sent to the data management system. Now, I am writing a functional design for the scenario which deals with mapping XML fields between our system and the data management system. Question: I was wondering if someone has had to deal with mapping XML fields between two systems (without XSLT being involved), and whether you used a table to represent the field mapping (example is below) or any other visualization tool which does not break the bank? I am trying to find out if there is a better way to represent XML mapping in your design documents. The widely accepted and used method seems to be a simple table, such as in the picture, to illustrate the mapping. I am wondering if there are alternative ways or tools to represent it, such as in Altova:

    Read the article

  • How can we make agile enjoyable for developers that like to personally, independently own large chunks from start to finish

    - by Kris
    We’re roughly midway through our transition from waterfall to agile using scrum; we’ve changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn’t suit everyone. There are a handful of developers that are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level. The basic issue is this: Some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction and shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this to be a positive change; someone that loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being open to change. Certainly we’ve seen a few people that don’t like change, but in the cases I’m concerned about, the individuals are good performers, genuinely open to change, they make an effort, they see how the rest of the team is changing and they want to fit in. It’s not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don’t find joy in work like they used to. I’m sure we can’t be the only place that hasn’t bumped up on this. How have others approached this? If you’re a developer that is motivated by personally owning a big chunk of work from end to end, and you’ve adjusted to a different way of working, what did it for you?

    Read the article

  • Suitability of ground fog using layered alpha quads?

    - by Nick Wiggill
    A layered approach would use a series of massive alpha-textured quads arranged parallel to the ground, intersecting all intervening terrain geometry, to provide the illusion of ground fog quite effectively from high up, looking down, and somewhat less effectively when inside the fog and looking toward the horizon (see image below). Alternatively, a shader-heavy approach would instead calculate density as function of view distance into the ground fog substrate, and output the fragment value based on that. Without having to performance-test each approach myself, I would like first to hear others' experiences (not speculation!) on what sort of performance impact the layered alpha texture approach is likely to have. I ask specifically due to the oft-cited impacts of overdraw (not sure how fill-rate bound your average desktop system is). A list of games using this approach, particularly older games, would be immensely useful: if this was viable on pre DX9/OpenGL2 hardware, it is likely to work fine for me. One big question is in regards to this sort of effect: (Image credit goes to Lume of lume.com) Notice how the vertical fog gradation is continuous / smooth. OTOH, using textured quad layers, I can only assume that layers would be mighty obvious when walking through them -- the more sparse they were, the more obvious this would be. This is in contrast to where fog planes are aligned to face the player every frame, where this coarseness would be much less obvious.
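    For comparison, a sketch of the core math behind the shader-heavy alternative, density integrated along the eye-to-fragment ray with an exponential height falloff, which is what produces the smooth vertical gradation in the image described above. It is written here as plain C++ for readability, though it would normally live in a fragment shader, and all constants are illustrative assumptions rather than values from the question:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Fraction of the fragment colour to replace with fog colour, in [0, 1].
        float groundFogAmount(Vec3 eye, Vec3 frag,
                              float baseHeight = 0.0f,   // height of full-density fog
                              float falloff    = 0.15f,  // how fast density fades with height
                              float density    = 0.02f,  // density at baseHeight
                              int   steps      = 8)
        {
            Vec3 d = { frag.x - eye.x, frag.y - eye.y, frag.z - eye.z };
            float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            float accumulated = 0.0f;
            for (int i = 0; i < steps; ++i) {
                float t = (i + 0.5f) / steps;                    // midpoint sampling
                float h = eye.y + d.y * t - baseHeight;          // height above fog base
                float local = density * std::exp(-falloff * std::fmax(h, 0.0f));
                accumulated += local * (len / steps);            // density times segment length
            }
            return 1.0f - std::exp(-accumulated);                // Beer-Lambert style falloff
        }

    Because the density is evaluated continuously along the ray, there are no discrete layers to walk through, which is the main visual advantage over the stacked alpha quads at the cost of per-fragment work.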

    Read the article

  • Learning HTML5 - Best of RSS

    - by Albers
    These are some of the best RSS feeds I've found for keeping up with HTML5. I'm doing jQuery & MVC development as well, so you will find the links have a jQuery/MS angle to them.
    WhenCanIUse - The oh-so-necessary caniuse.com, in RSS update format: http://feeds.feedburner.com/WhenCanIUse
    ScriptJunkie - http://services.social.microsoft.com/feeds/feed/query/tag/scriptjunkieLearn/eq/ns/DiscoveryFeedTool/eq/and - A good HTML, JavaScript, CSS site hosted by MS
    Rachel Appel's blog - http://rachelappel.com/rss?containerid=13 - HTML5, JavaScript, and MVC links with a general MS angle
    Smashing Magazine - http://rss1.smashingmagazine.com/feed/ - Really high quality articles with a focus towards the design side of the web development picture
    IEBlog - blogs.msdn.com/b/ie/rss.aspx - No surprise, the focus is on IE10, but it is really a great resource for new browser tech.
    MisfitGeek - http://feeds.feedburner.com/MisfitGeek - Joe recently switched from MS to Mozilla. New job, but he still puts out great Weekly Links summaries.
    The Big Web Show - http://feeds.feedburner.com/bigwebshow - A podcast covering web development & design topics
    Elijah Manor/Web Dev .NET - I'm cheating on this one a little bit. Elijah is a fantastic JS & web development resource. He has a site at Web Dev .NET, but honestly these days you are better off following him on Google+
    ...and you can of course sign up to follow the W3C as well, although I don't think there is an HTML5-specific RSS feed. Good luck!

    Read the article

  • B.S.in Computer Science, weak eyes => career change

    - by Prometheus
    So I am going to earn B.S. in Computer Science soon. I like computers. I like programming. The problem is that my eyes are very weak. Depending on their condition, I can only put in about 6 hours in front of computer a day. If I push myself, I have trouble even keeping my eyes open because of soreness/pain, consequently headaches. My eyes do not have medical conditions. I was just born with weak eyes. I tried many different approaches to work around this problem - better monitor, breaks every 10 minutes, supplements... I even memorized a lot of shortcuts to reduce my time on computers! But I am finally giving up. I do not think I can be a programmer for the rest of my life. I was the top of my class in high school because all works were paper-based, I did average in college due to the nature of my eyes and the difficulty of the material. So what do you recommend I do? Or, Is there a career that is similar to programming but requires interacting with computers less?

    Read the article

  • Optimizing lifestyle and training

    - by Gabe
    I am a college freshman who has recently discovered a passion for computer science. Having had my first lick of formal python training last semester, I have cast aside my previously hedonist way of life and tunneled my sights on becoming the most rounded and proficient programmer I can be. I know that I'm taking strides in the right direction (I've stopped smoking, I've been exercising every day, I've taught myself C++ and OpenGL, and I've begun training in kung-fu and meditation), yet I am still finding myself struggling to achieve satisfactory results. I would like to be able to spend a good 3-4 hours every day burning through textbooks. I have the time cleared and the resources allocated. The problem lies in the logistics-- I have never taken anything seriously before. Recently I've realized that I am clueless when it comes to taking care of myself and gaining control of my mind, and it drastically hinders my productivity. My question is this: How can I learn to manage my time and take care of myself such that I can spend the maximum amount of time every day studying with steady concentration? Personal tricks would be key here: techniques you use to get yourself to sleep, a diet that yields focus, even computer break stretching routines or active reading techniques. Anything you could think of here would be great. I was a low-life in high school and I have the drive to turn my life around, I'm just quite a bit behind in the way of good habits :)

    Read the article

  • How do I handle having too many links on a webpage because of my menu

    - by RandomBen
    I am developing a website that has a drop-down menu at the top of it. The menu has around 100 links in it that are repeated on every page. Every page also has some number of links below the menu that may or may not be in the menu itself. My issue is that Google says they generally don't like pages with more than 100 links on them. Is there any way to change the links in the menu so that they no longer "count" towards my max of 100 links? It seems like there should be an easy way to do this, but there really doesn't seem to be. Using rel=nofollow still counts towards the number of links on the page, at least according to Google, so what other options do I have? I looked into where the 100 comes from and I found that it used to be here: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769#2 but that is no longer the case. I found a more definitive and frankly muddier answer here: http://www.seomoz.org/blog/questions-answers-with-googles-spam-guru from Matt Cutts from 2007. Long story short, in 2007 they still felt 100 links was a good number, but they stated you could go far beyond that. In fact, they said that pages with high PageRank could have 200-300. It did sound like having many links could reduce the PageRank of the page with all of the links, or possibly of all the items linked to. Also, I know IIS7's SEO 1.0 toolkit suggests that pages should have no more than 250 links.

    Read the article
