Search Results

Search found 21028 results on 842 pages for 'single player'.

Page 477/842

  • Best depth sorting method for a Top Down 2D game using a 3D physics engine

    - by Alic44
    I've spent many days googling this and still have issues with my game engine that I'd like to ask about, which I haven't seen addressed before. I think the problem is that my game is an unusual combination of a completely 2D graphical approach using XNA's SpriteBatch and a completely 3D engine (the amazing BEPU physics engine) with rotation mostly disabled. In essence, my question is similar to this one (the part about "faux 3D"), but the difference is that in my game, the player as well as every other creature is represented by a 3D object, and they can all jump, pick up other objects, and throw them around.

    What this means is that sorting by one value, such as a Z position (how far north/south a character is on the screen), won't work, because as soon as a smaller creature jumps on top of a larger creature or a box and walks backwards, the moment its Z value is less than that other creature's, it will appear to be behind the object it is actually standing on. I originally solved this problem by splitting every object in the game into physics boxes which MUST have a Y height equal to their Z depth. I then based the depth sorting value on the object's Y position (how high it is off the ground) PLUS its Z position (how far north or south it is on the screen). The problem with this approach is that it requires all moving objects in the game to be split graphically into chunks which match up with a physical box whose Y dimension equals its Z dimension. Which is stupid.

    So, I got inspired last night to rewrite with a fresh approach. My new method is a little more complex, but I think a little more sane: every object which needs to be sorted by depth in the game exposes the interface IDepthDrawable and is added to a list owned by the DepthDrawer object. IDepthDrawable contains:

    public interface IDepthDrawable
    {
        Rectangle Bounds { get; } // possibly change this to a class if struct copying of the XNA Rectangle type becomes an issue
        DepthDrawShape DepthShape { get; }
        void Draw(SpriteBatch spriteBatch);
    }

    The Bounds Rectangle of each IDepthDrawable object represents the 2D axis-aligned bounding box it will take up when drawn to the screen. Anything that doesn't intersect the screen will be culled at this stage, and the remaining on-screen IDepthDrawables will be Bounds-tested for intersections with each other. This is where I get a little less sure of what I'm doing. Each group of collisions will be added to a list or other collection, and each list will sort itself based on its DepthShape property, which will have access to the object-to-be-drawn's physics information. For starting out, let's assume everything in the game is an axis-aligned 3D box shape. Boxes are pretty easy to sort. Something like:

    if (depthShape1.Back > depthShape2.Front)
        // depthShape1 is in front of depthShape2, so depthShape1 goes on top.
    else if (depthShape1.Bottom > depthShape2.Top)
        // depthShape1 is above depthShape2, so depthShape1 goes on top.
    // If neither of these is true, depthShape2 must be in front or above.

    So, by sorting draw order by several different factors from the physics engine, I believe I can get a really correct draw order. My question is: is this a good way of going about this, or is there some tried and true, tested way which is completely different and has somehow completely eluded me on the internets?
    And, if this does seem like a good way to remake my draw-order sorting, what's the right sorting algorithm for reordering the Bounds Rectangle collision lists, and how do you deal with a Bounds Rectangle colliding with two different objects which don't collide with each other? I know these are solved problems, but I've only been programming for a year, so any specific input here will be greatly appreciated. Thanks for reading this far, ye who made it -- sorry it was so long!
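
    A rough sketch of the ordering pass described above, in plain C# with hypothetical names (the DepthShape faces mirror the question's pseudocode; Bounds culling and the XNA types are left out). The point it illustrates: "draws on top of" is only defined for overlapping pairs and is not a total order, so rather than feeding the pairwise test to List.Sort, the overlapping pairs can be turned into a small dependency graph and topologically sorted:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public sealed class DepthShape
    {
        public float Front, Back, Top, Bottom;   // faces of the axis-aligned physics box

        // Same two tests as the pseudocode above: true when this shape
        // should be drawn over (after) the other one.
        public bool DrawsOver(DepthShape other) =>
            Back > other.Front || Bottom > other.Top;
    }

    public static class DepthSorter
    {
        // items: the on-screen drawables; shapeOf: pulls their physics box;
        // overlaps: index pairs whose 2D screen Bounds intersect.
        public static List<T> Order<T>(IList<T> items,
                                       Func<T, DepthShape> shapeOf,
                                       IEnumerable<(int A, int B)> overlaps)
        {
            var drawAfter = new List<int>[items.Count];   // edges: 'below' index -> 'above' index
            var indegree  = new int[items.Count];
            for (int i = 0; i < items.Count; i++) drawAfter[i] = new List<int>();

            foreach (var (a, b) in overlaps)
            {
                bool aOnTop = shapeOf(items[a]).DrawsOver(shapeOf(items[b]));
                int below = aOnTop ? b : a, above = aOnTop ? a : b;
                drawAfter[below].Add(above);              // draw 'below' first, then 'above'
                indegree[above]++;
            }

            // Kahn's topological sort: emit anything nothing else must be drawn under.
            var ready = new Queue<int>(Enumerable.Range(0, items.Count).Where(i => indegree[i] == 0));
            var order = new List<T>(items.Count);
            while (ready.Count > 0)
            {
                int i = ready.Dequeue();
                order.Add(items[i]);
                foreach (int j in drawAfter[i])
                    if (--indegree[j] == 0) ready.Enqueue(j);
            }

            // A cycle (A over B over C over A) has no correct order; append the leftovers as-is.
            order.AddRange(Enumerable.Range(0, items.Count).Where(i => indegree[i] > 0).Select(i => items[i]));
            return order;
        }
    }

    If a genuine cycle shows up (a small creature standing on a box that is itself in front of the creature's feet), no single draw order is correct; splitting the sprite or falling back to any order for the cycle, as the sketch does, is the usual compromise.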

    Read the article

  • Oracle Brings Analytics to Project Management

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss  Nonprofit and for-profit organizations have many differences, but there is one way they are alike—managers struggle with huge amounts of data generated every day. Project data by itself has limited use—but any organization that can gain insight to make accurate predictions or to use resources more effectively can gain an operational advantage. Oracle’s Primavera P6 Analytics 2.0 business intelligence solution enables organizations using Oracle’s Primavera P6 Professional Project Management to do just that: identify critical issues and uncover trends in stores of project data. Primavera P6 Analytics provides management with the ability to look at not only how a single effort is progressing, but also how the entire organization is doing from a project perspective. The latest release includes new features that make it even easier to gather and analyze critical information. For example, the addition of geocoding gives Primavera P6 Analytics users the ability to track resources geographically on longitude and latitude and use a map to get an overall view of how projects, programs, and activities are deployed. “A nonprofit with relief projects in Vietnam, for example, can drill down to the project and get a world view and a regional view,” says Yasser Mahmud, vice president of product strategy and industry marketing in Oracle’s Primavera Global Business Unit. “Then they can drill down further to show statistics; key performance indicators; and how that program, portfolio, or project work is actually getting done.” The addition of new mobile capabilities to Primavera P6 Analytics puts deep-dive analysis into project managers’ hands with compatibility with major tablet operating systems. Now, nonprofits or for-profits working in remote locations can provide real-time visibility into projects to alert management if issues are occurring that need to be addressed immediately. “Primavera P6 Analytics generates information that can help organizations improve their utilization and trim down overall operating costs,” says Mahmud. “But more importantly, it gives organizations improved visibility.”

    Read the article

  • Building Interactive User Interfaces with Microsoft ASP.NET AJAX: Refreshing An UpdatePanel With JavaScript

    The ASP.NET AJAX UpdatePanel provides a quick and easy way to implement a snappier, AJAX-based user interface in an ASP.NET WebForm. In a nutshell, UpdatePanels allow page developers to refresh selected parts of the page (instead of refreshing the entire page). Typically, an UpdatePanel contains user interface elements that would normally trigger a full page postback - controls like Buttons or DropDownLists that have their AutoPostBack property set to True. Such controls, when placed inside an UpdatePanel, cause a partial page postback to occur. On a partial page postback only the contents of the UpdatePanel are refreshed, avoiding the "flash" of having the entire page reloaded. (For a more in-depth look at the UpdatePanel control, refer back to the Using the UpdatePanel installment in this article series.) Triggering a partial page postback refreshes the contents within an UpdatePanel, but what if you want to refresh an UpdatePanel's contents via JavaScript? Ideally, the UpdatePanel would have a client-side function named something like Refresh that could be called from script to perform a partial page postback and refresh the UpdatePanel. Unfortunately, no such function exists. Instead, you have to write script that triggers a partial page postback for the UpdatePanel you want to refresh. This article looks at how to accomplish this using just a single line of markup/script and includes a working demo you can download and try out for yourself. Read on to learn more! Read More >

    Read the article

  • database on SSD: data only or the DBM program too?

    - by simone
    I plan on moving the data I use for statistical analysis (100-ish Gb) onto an SSD. The data is either sqlite single-file db's, or postgresql-managed data. The SSD is 240 Gb, 550 MB/s read and 520 MB/s write. Should I reserve that space for the data only, or would it be a good idea to install the operating system (Mac OS X) and the application directory (Adobe Suite, Microsoft Office and the like) on the SSD too? And would it make a substantial speed difference whether I also install the postgresql binaries on the SSD? I have plenty of other space (another 300Gb hard-drive, and a 1Tb one). Don't know the features of the non-SSD drives, though they're our standard equipment on all Macs, and they're definitely OK. Thanks.

    Read the article

  • Using a KVM with a Sun Ultra 45

    - by Kit Peters
    I have a friend with two machines: a Mac and a Sun Ultra45. He wants to use a single monitor (connected via DVI) and USB keyboard / mouse with both machines. Try as I might, the Sun box will not bring up a console if the KVM is connected. However, if I connect a monitor, keyboard, and mouse directly to the Sun box, it will bring up a console that I can then (IIRC) connect to the KVM. Is there any way I can make the Sun box always bring up a console, regardless of whether it "sees" a monitor, keyboard, and mouse?

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. I (and my team of 3) have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue).

    The basic dev architecture today is:
    - a single-page AJAX web UI (ASP.NET MVC) with floating chat windows (think Gmail)
    - a backend Windows service to queue and route the chat requests; this service also logs the chats, calculates service levels, etc.
    - a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which agents are still connected (online)

    And our hardware architecture today is:
    - 2 servers to host the web UI portion of the application
    - a load balancer to route requests to the 2 different web app servers
    - a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats

    So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question: how can I make this backend Windows service logic able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I make this more distributed? How can I spread the work of the backend Windows service across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances -- so maybe it is something I should be less concerned about? Thanks in advance for any help!
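
    For illustration only (hypothetical table and column names, not the poster's schema): the usual way to spread this kind of routing work is to keep the pending chat requests in shared storage -- here the existing SQL Server database used as a queue -- and run the same Windows service on two or more machines, each atomically claiming one request at a time (the "competing consumers" pattern):

    // Hypothetical competing-consumers worker: several identical service instances
    // can run this loop on different machines against the same SQL Server database.
    using System;
    using System.Data.SqlClient;
    using System.Threading;

    class ChatRouterWorker
    {
        const string ConnString = "Server=db;Database=Chat;Integrated Security=true"; // assumption

        // Atomically claim one pending chat request; READPAST lets other instances
        // skip rows this instance has locked, so workers don't block each other.
        const string ClaimSql = @"
            UPDATE TOP (1) ChatRequests WITH (ROWLOCK, READPAST)
            SET    Status = 'Claimed', ClaimedBy = @worker, ClaimedAtUtc = SYSUTCDATETIME()
            OUTPUT inserted.Id, inserted.CustomerId
            WHERE  Status = 'Pending';";

        public void Run(string workerName)
        {
            while (true)
            {
                using (var conn = new SqlConnection(ConnString))
                using (var cmd = new SqlCommand(ClaimSql, conn))
                {
                    cmd.Parameters.AddWithValue("@worker", workerName);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        if (reader.Read())
                        {
                            int requestId  = reader.GetInt32(0);
                            int customerId = reader.GetInt32(1);
                            RouteToAvailableAgent(requestId, customerId);  // existing routing logic
                            continue;                                      // immediately look for more work
                        }
                    }
                }
                Thread.Sleep(500);  // queue empty; back off briefly
            }
        }

        void RouteToAvailableAgent(int requestId, int customerId)
        {
            // placeholder for the service's existing "find an available agent and push the chat" logic
        }
    }

    The same shape works with a hosted message queue in place of the table; what matters is that claiming a request is atomic and re-claimable after a timeout, so any one worker instance (or the box it runs on) can die without losing queued chats. The SQL Server machine itself then remains the single point of failure to address separately (mirroring or replication).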

    Read the article

  • Cross-platform, human-readable, du on root partition that truly ignores other filesystems

    - by nice_line
    I hate this so much:

    Linux builtsowell 2.6.18-274.7.1.el5 #1 SMP Mon Oct 17 11:57:14 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

    df -kh
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/mpath0p2 8.8G 8.7G 90M 99% /
    /dev/mapper/mpath0p6 2.0G 37M 1.9G 2% /tmp
    /dev/mapper/mpath0p3 5.9G 670M 4.9G 12% /var
    /dev/mapper/mpath0p1 494M 86M 384M 19% /boot
    /dev/mapper/mpath0p7 7.3G 187M 6.7G 3% /home
    tmpfs 48G 6.2G 42G 14% /dev/shm
    /dev/mapper/o10g.bin 25G 7.4G 17G 32% /app/SIP/logs
    /dev/mapper/o11g.bin 25G 11G 14G 43% /o11g
    tmpfs 4.0K 0 4.0K 0% /dev/vx
    lunmonster1q:/vol/oradb_backup/epmxs1q1 686G 507G 180G 74% /rpmqa/backup
    lunmonster1q:/vol/oradb_redo/bisxs1q1 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl1
    lunmonster1q:/vol/oradb_backup/bisxs1q1 686G 507G 180G 74% /bisxs1q/backup
    lunmonster1q:/vol/oradb_exp/bisxs1q1 2.0T 1.1T 984G 52% /bisxs1q/exp
    lunmonster2q:/vol/oradb_home/bisxs1q1 10G 174M 9.9G 2% /bisxs1q/home
    lunmonster2q:/vol/oradb_data/bisxs1q1 52G 5.2G 47G 10% /bisxs1q/oradata
    lunmonster1q:/vol/oradb_redo/bisxs1q2 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl2
    ip-address1:/vol/oradb_home/cspxs1q1 10G 184M 9.9G 2% /cspxs1q/home
    ip-address2:/vol/oradb_backup/cspxs1q1 674G 314G 360G 47% /cspxs1q/backup
    ip-address2:/vol/oradb_redo/cspxs1q1 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl1
    ip-address2:/vol/oradb_exp/cspxs1q1 4.1T 1.5T 2.6T 37% /cspxs1q/exp
    ip-address2:/vol/oradb_redo/cspxs1q2 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl2
    ip-address1:/vol/oradb_data/cspxs1q1 160G 23G 138G 15% /cspxs1q/oradata
    lunmonster1q:/vol/oradb_exp/epmxs1q1 2.0T 1.1T 984G 52% /epmxs1q/exp
    lunmonster2q:/vol/oradb_home/epmxs1q1 10G 80M 10G 1% /epmxs1q/home
    lunmonster2q:/vol/oradb_data/epmxs1q1 330G 249G 82G 76% /epmxs1q/oradata
    lunmonster1q:/vol/oradb_redo/epmxs1q2 5.0G 609M 4.5G 12% /epmxs1q/rdoctl2
    lunmonster1q:/vol/oradb_redo/epmxs1q1 5.0G 609M 4.5G 12% /epmxs1q/rdoctl1
    /dev/vx/dsk/slaxs1q/slaxs1q-vol1 183G 17G 157G 10% /slaxs1q/backup
    /dev/vx/dsk/slaxs1q/slaxs1q-vol4 173G 58G 106G 36% /slaxs1q/oradata
    /dev/vx/dsk/slaxs1q/slaxs1q-vol5 75G 952M 71G 2% /slaxs1q/exp
    /dev/vx/dsk/slaxs1q/slaxs1q-vol2 9.8G 381M 8.9G 5% /slaxs1q/home
    /dev/vx/dsk/slaxs1q/slaxs1q-vol6 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl1
    /dev/vx/dsk/slaxs1q/slaxs1q-vol3 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl2
    /dev/mapper/appoem 30G 1.3G 27G 5% /app/em

    Yet, I equally, if not quite a bit more, also hate this:

    SunOS solarious 5.10 Generic_147440-19 sun4u sparc SUNW,SPARC-Enterprise

    Filesystem size used avail capacity Mounted on
    kiddie001Q_rpool/ROOT/s10s_u8wos_08a 8G 7.7G 1.3G 96% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 15G 1.8M 15G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    sharefs 0K 0K 0K 0% /etc/dfs/sharetab
    fd 0K 0K 0K 0% /dev/fd
    kiddie001Q_rpool/ROOT/s10s_u8wos_08a/var 31G 8.3G 6.6G 56% /var
    swap 512M 4.6M 507M 1% /tmp
    swap 15G 88K 15G 1% /var/run
    swap 15G 0K 15G 0% /dev/vx/dmp
    swap 15G 0K 15G 0% /dev/vx/rdmp
    /dev/dsk/c3t4d4s0 320G 279G 41G 88% /fs_storage
    /dev/vx/dsk/oracle/ora10g-vol1 292G 214G 73G 75% /o10g
    /dev/vx/dsk/oec/oec-vol1 64G 33G 31G 52% /oec/runway
    /dev/vx/dsk/oracle/ora9i-vol1 64G 33G 31G 59% /o9i
    /dev/vx/dsk/home 23G 18G 4.7G 80% /export/home
    /dev/vx/dsk/dbwork/dbwork-vol1 292G 214G 73G 92% /db03/wk01
    /dev/vx/dsk/oradg/ebusredovol 2.0G 475M 1.5G 24% /u21
    /dev/vx/dsk/oradg/ebusbckupvol 200G 32G 166G 17% /u31
    /dev/vx/dsk/oradg/ebuscrtlvol 2.0G 475M 1.5G 24% /u20
    kiddie001Q_rpool 31G 97K 6.6G 1% /kiddie001Q_rpool
    monsterfiler002q:/vol/ebiz_patches_nfs/NSA0304 203G 173G 29G 86% /oracle/patches
    /dev/odm 0K 0K 0K 0% /dev/odm

    The people with the authority don't rotate logs or delete packages after install in my environment. Standards, remediation, cohesion...all fancy foreign words to me.

    ==============

    How am I supposed to deal with / filesystem full issues across multiple platforms that have a devastating number of mounts? On Red Hat el5, du -x apparently avoids traversal into other filesystems. While this may be so, it does not appear to do anything if run from the / directory. On Solaris 10, the equivalent flag is du -d, which apparently packs no surprises, allowing Sun to uphold its legacy of inconvenience effortlessly. (I'm hoping I've just been doing it wrong.)

    I offer up for sacrifice my Frankenstein's monster. Tell me how ugly it is. Tell me I should download forbidden 3rd party software. Tell me I should perform unauthorized coreutils updates, piecemeal, across 2000 systems, with no single sign-on, no authorized keys, and no network update capability. Then, please help me make this bastard better:

    pwd
    /
    du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \
    cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \
    sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \
    cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shx | \
    egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M"

    My biggest failure and regret is that it still requires a single character edit for Solaris:

    pwd
    /
    du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \
    cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \
    sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \
    cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shd | \
    egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M"

    This will exclude all non-/ filesystems in a du search from the / directory by basically munging an egrepped df from a second pipe-delimited egrep regex subshell exclusion that is naturally further excluded upon by a third egrep in what I would like to refer to as "the whale." The munge-fest frantically escalates into some xargs du recycling where -x/-d is actually useful, and a final, gratuitous egrep spits out a list of directories that almost feels like an accomplishment:

    Linux:
    54M etc/gconf
    61M opt/quest
    77M opt
    118M usr/ ##===\
    149M etc
    154M root
    303M lib/modules
    313M usr/java ##====\
    331M lib
    357M usr/lib64 ##=====\
    433M usr/lib ##========\
    1.1G usr/share ##=======\
    3.2G usr/local ##========\
    5.4G usr ##<=============Ascending order to parent
    94M app/SIP ##<==\
    94M app ##<=======Were reported as 7gb and then corrected by second du with -x.

    Solaris:
    63M etc
    490M bb
    570M root/cores.ric.20100415
    1.7G oec/archive
    1.1G root/packages
    2.2G root
    1.7G oec

    Guess what? It's really slow. Edit: Are there any bash one-liner heroes out there that can turn my bloated abomination into divine intervention, or at least something resembling gingerly copypasta?

    Read the article

  • Force Blank TextBox with ASP.Net MVC Html.TextBox

    - by Doug Lampe
    I recently ran into a problem with the following scenario: I have parent/child data with a one-to-many relationship from the parent to the child. I want to be able to update the parent and existing child data AND add a new child record, all in a single post. I don't want to create a model just to store the new values.

    One of the things I LOVE about MVC is how flexible it is in dealing with posted data. If you have data that isn't in your model, you can simply use the non-strongly-typed HTML helper extensions and pass the data into your actions as parameters, or use the FormCollection. I thought this would give me the solution I was looking for. I simply used Html.TextBox("NewChildKey") and Html.TextBox("NewChildValue") and added parameters to my action to take the new values. So here is what my action looked like:

    [HttpPost]
    public ActionResult EditParent(int? id, string newChildKey, string newChildValue, FormCollection forms)
    {
        Model model = ModelDataHelper.GetModel(id ?? 0);
        if (model != null)
        {
            if (TryUpdateModel(model))
            {
                if (ModelState.IsValid)
                {
                    model = ModelDataHelper.UpdateModel(model);
                }
                string[] keys = forms.GetValues("ChildKey");
                string[] values = forms.GetValues("ChildValue");
                ModelDataHelper.UpdateChildData(id ?? 0, keys, values);
                ModelDataHelper.AddChildData(id ?? 0, newChildKey, newChildValue);
                model = ModelDataHelper.GetModel(id ?? 0);
            }
            return View(model);
        }
        return new EmptyResult();
    }

    The only problem with this is that MVC is TOO smart. Even though I am not using a model to store the new child values, MVC still passes the values back to the text boxes via the model state. The fix for this is simple but not necessarily obvious: simply remove the data from the model state before returning the view:

    ModelState.Remove("NewChildKey");
    ModelState.Remove("NewChildValue");

    Two lines of code to save a lot of headaches.

    Read the article

  • Dovecot ignoring maximum number of IMAP connections

    - by Michelle
    I have a single mailbox mail server running Dovecot/Postfix and I have two IMAP clients, Thunderbird on the PC and K9 on Android. I keep on receiving this error in my logs even after I change the 'mail_max_userip_connections' variable to 50.

    puppet dovecot: imap-login: Maximum number of connections from user+IP exceeded (mail_max_userip_connections=10): user=<[email protected]>, method=PLAIN, rip=62.242.90.2, lip=198.29.31.229, TLS

    Why does it say that it is set to 10 in the log? Is that hardcoded?

    grep -r "mail_max_userip_connections" /etc/dovecot
    /etc/dovecot/conf.d/20-managesieve.conf: #mail_max_userip_connections = 10
    /etc/dovecot/conf.d/20-pop3.conf: #mail_max_userip_connections = 3
    /etc/dovecot/conf.d/20-imap.conf: mail_max_userip_connections = 50

    I've restarted dovecot after making the changes but this error is still logged and I can't access the mailbox. Can anyone help me understand why I can't seem to raise the maximum limit?

    Read the article

  • Leveraging Code in Ever Bigger Games

    - by ashes999
    Summary: The same way that I continually build complex engines and libraries within a single platform and technology to allow me to build increasingly bigger and better games, how to continue this when development crosses into different platforms? If I switch platforms, how do I leverage past code and experiences? Games are hard to build. Big games are even harder to build. I've decided that to be able to make big games, I need to start building smaller games, and building up an asset base of code, assets (graphics, sounds), tools, and most importantly, game engines, so that I can eventually get there. One game at a time. Let me give an analogy. To build an MMO 3D RPG, I would approach this by building and releasing small games with increasingly more features. This could entail, for example: A simple 2D game A tile-based game A game with RPG elements (items, equipment, monsters, battle) A full-fledged RPG A 3D RPG The problem now is if I have to change platforms or tools, I don't know how to leverage past code-bases (and experience) to start with a mature product. Right now, I'm writing Silverlight (FlatRedBall) games. Let's say I stick with this for ten years, and then suddenly decide to write a PS6 game, which is in a different programming language entirely. Granted, I have ten years of game-development experience (and correspondingly ten years of professional software development experience from my day job) to back me up. But I would still like some way to transplant that 2D RPG engine into the new programming language, or else leverage it somehow. Is this even possible? What are my options?
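
    Not the poster's code -- just a minimal sketch of one common way to make an engine outlive a platform: keep the game rules behind tiny interfaces you own, so only thin adapters touch XNA/FlatRedBall today or whatever the next platform exposes (all names here are invented):

    // Hypothetical platform-abstraction sketch: the game logic never references a platform API.
    public interface IRenderer { void DrawSprite(string spriteId, float x, float y); }
    public interface IInput    { bool IsPressed(string action); }

    // Pure game logic: no XNA, FlatRedBall, or console types anywhere in here.
    public sealed class PlayerController
    {
        private float _x, _y;
        private const float Speed = 2.5f;

        public void Update(IInput input)
        {
            if (input.IsPressed("MoveLeft"))  _x -= Speed;
            if (input.IsPressed("MoveRight")) _x += Speed;
        }

        public void Draw(IRenderer renderer) => renderer.DrawSprite("player", _x, _y);
    }

    // Per-platform adapters (e.g. an XnaRenderer wrapping SpriteBatch) are the only
    // pieces that have to be rewritten when the platform or framework changes.

    The logic still has to be re-typed if the next platform forces a new language, but a design with this kind of boundary ports almost mechanically, and the experience of having drawn the boundary is the part that transfers for free.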

    Read the article

  • Windows 2008 Server in Amazon EC2 stops responding when SSTP/VPN connection is closed

    - by user38349
    All, I have a single Windows 2008 server running in Amazon's EC2 cloud. It's running a web application that works fine and is accessible to the outside world. I need 3-5 developers to be able to work on a database on the server, and was intending to accomplish this by setting up SSTP/RRAS on the server and letting them VPN in. This has been a bit of an ordeal due to the number of server roles and the messing with certificates that has been needed, but my VPN connection works now (all clients will be Windows 7). My problem is that when I use my VPN connection (from the client side) the server hangs, though not at any consistent place: sometimes it's when I close the connection, sometimes when I'm making the connection. The only way that I've found to get it back is to reboot it from the Amazon management console. Thanks for any guidance. Duncan

    Read the article

  • Should I pick up a functional programming language?

    - by Statement
    I have recently been more concerned about the way I write my code. After reading a few books on design patterns (and overzealous implementation of them, I'm sure) I have shifted my thinking greatly toward encapsulating that which changes. I tend to notice that I write fewer interfaces and more method-oriented code, where I love to spruce life into old classes with predicates, actions and other delegate tasks. I tend to think that it's often the actions that change, so I encapsulate those. I even often, although not always, break interfaces down to a single method, and then I prefer to use a delegate for the task instead of forcing client code to create a new class. So I guess it then hit me: should I be doing functional programming instead?

    Edit: I may have a misconception about functional programming. Currently my language of choice is C#, and I come from a C++ background. I work as a game developer but I am currently unemployed. I have a great passion for architecture. My virtues are clean, flexible, reusable and maintainable code. I don't know if I have been poisoned by these ways or if it is for the better. Am I having a refactoring fever or should I move on? I understand this might be a question of "use the right tool for the job", but I'd like to hear your thoughts. Should I pick up a functional language? One of my fear factors is leaving the comfort of Visual Studio.
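
    A minimal illustration of the shift the question describes (all names invented): collapsing a one-method interface into a delegate parameter, which is also the door into the functional style C# already supports:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // The interface-based version: every filtering rule needs its own class.
    public interface IMonsterFilter { bool Matches(Monster m); }
    public sealed class AliveFilter : IMonsterFilter
    {
        public bool Matches(Monster m) => m.Health > 0;
    }

    public sealed class Monster { public int Health; public int Level; }

    public static class MonsterList
    {
        // The delegate-based version: callers pass behaviour directly, no new class.
        public static IEnumerable<Monster> Where(IEnumerable<Monster> monsters, Func<Monster, bool> predicate)
            => monsters.Where(predicate);
    }

    public static class Demo
    {
        public static void Main()
        {
            var monsters = new List<Monster> { new Monster { Health = 0,  Level = 3 },
                                               new Monster { Health = 12, Level = 5 } };

            // Old style: new up a filter class just to carry one method.
            IMonsterFilter filter = new AliveFilter();
            var alive = monsters.Where(m => filter.Matches(m));

            // Delegate style: behaviour as data, composable on the spot.
            var toughAndAlive = MonsterList.Where(monsters, m => m.Health > 0 && m.Level >= 5);

            Console.WriteLine(string.Join(", ", toughAndAlive.Select(m => m.Level)));  // prints "5"
        }
    }

    Delegates, lambdas and LINQ are the functional subset C# already ships with, so this experiment doesn't require leaving Visual Studio.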

    Read the article

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project I did involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader, and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine, and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!

    Read the article

  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of hours possible for each task also, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically am just calculating whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.
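
    For concreteness, a sketch of the statistical roll-up this approach relies on (PERT-style weights shown purely for illustration; McConnell's own multipliers and confidence tables differ in the details). Each task $i$ gets a best case $b_i$, most likely $m_i$, and worst case $w_i$:

        E_i = \frac{b_i + 4 m_i + w_i}{6}, \qquad \sigma_i \approx \frac{w_i - b_i}{6}

    and the project-level figures are

        E = \sum_i E_i, \qquad \sigma = \sqrt{\sum_i \sigma_i^2}, \qquad \text{estimate at } \sim 75\% \text{ confidence} \approx E + 0.67\,\sigma

    (0.67 being roughly the 75th-percentile z-value of a normal distribution). Written this way, a forgotten or changed task is just another $(b_i, m_i, w_i)$ row added when it surfaces, so the historical record can distinguish "the estimate for a known task was off" from "a task was missing from the list" -- which is the split that matters most when calibrating future estimates.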

    Read the article

  • Is it possible to have multiple ReWrite rules that all do the same Action, for an IIS7.5 webserver?

    - by Pure.Krome
    I've got the rewrite module working great for my IIS 7.5 site. Now I wish to add a number of URLs that all go to an HTTP 410 Gone status. E.g.

    <rule name="Old Site = image1" patternSyntax="ExactMatch" stopProcessing="true">
        <match url="image/loading_large.gif"/>
        <match url="image/aaa.gif"/>
        <match url="image/bbb.gif"/>
        <match url="image/ccc.gif"/>
        <action type="CustomResponse" statusCode="410" statusReason="Gone" statusDescription="The requested resource is no longer available" />
    </rule>

    But that's invalid - the website doesn't start, saying there's a rewrite config error. Is there another way I can do this? I don't particularly want to define a single URL and ACTION for each URL.

    Read the article

  • Nginx routing script for NodeJS and Wordpress

    - by Nilay Parikh
    We are moving our blogs and site from WordPress to NodeJS and are ready to move into production. However, I'm not able to figure out how to implement routing from the front server (Nginx) to NodeJS (the preferred web instance), falling back to WordPress (via reverse proxy) to serve the page during the transition period if the data isn't synced into the NodeJS site yet (in which case NodeJS throws a 404).

    Q1. Is this approach good for the scenario, or can anyone suggest a better approach?

    Q2. Should NodeJS act as the reverse proxy itself (using bouncy: https://github.com/substack/bouncy or a similar package) in the event of a fallback, or should I stick with Nginx doing so via the FastCGI approach? Both NodeJS and WordPress are on a single server.

    First scenario:
    User -> Nginx -> NodeJS (8080)
    - if the resource is available, serve it directly
    - if not, reverse-query WordPress and serve its content

    Second scenario:
    User -> Nginx -> NodeJS (8080)
    - if the resource is available, serve it directly
    - if not, NodeJS returns 404 to Nginx and an Nginx rule falls back to WordPress (FastCGI PHP)

    Later we plan to phase out WordPress and PHP from the server environment completely. I'd like to see any examples of Nginx or Varnish configs and/or NodeJS scripts you may have for me to refer to. Thanks.

    Read the article

  • Small-scale database options for .NET

    - by raney
    I have a .NET 4.0/WPF based application I've developed and maintain for my company that acts as a friendly GUI central-point-of-information, combining information pulled from a couple of SQL databases as well as CSV exports from a few other applications. I would like to build out my own database to support the entirety of the information that the application accesses, so that I could have a service running on my server that would read in the necessary remote SQL info and file exports, provide the user's application with a single database to connect to, and remove all of the file handling currently involved in the program (copying new CSV resources from a network location, reading them into memory each launch). I have complete control and flexibility here as long as the user's experience isn't affected, and this is as much a learning experience as it is tidying up. Caveat being, I don't have much in the way of a budget. Right now I recognize my options to be:
    - SQL Express - I'm comfortable with the server setup, and I like ADO.NET and LINQ to SQL. I feel that I have the least to learn here, but it would let me focus on SQL in a familiar environment. Perhaps in conjunction with Entity Framework?
    - MongoDB - I don't know a whole lot about it, but I've heard the name enough to make me curious. Brief research seems friendly enough, and there is .NET support. I like working with open source projects.
    My questions are:
    - What's popular and extensible right now? I'm not far from starting to job-hunt, and I'd like this project to be relevant going forward.
    - What am I missing? Pros, cons? Other options? What plays well with .NET?
    - What are the things I should be considering, the questions I should be asking, when making a decision like this?
    Thanks for your time.
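
    As a point of reference for the MongoDB option, a minimal sketch against the official .NET driver (MongoDB.Driver, 2.x API); the connection string, database, collection and class names here are placeholders, not a recommendation:

    using System;
    using System.Threading.Tasks;
    using MongoDB.Driver;

    public class ResourceRecord
    {
        public MongoDB.Bson.ObjectId Id { get; set; }   // mapped to _id by driver convention
        public string Source { get; set; }              // e.g. which CSV export or SQL table it came from
        public string Name { get; set; }
        public DateTime ImportedUtc { get; set; }
    }

    public static class MongoSketch
    {
        public static async Task Main()
        {
            var client  = new MongoClient("mongodb://localhost:27017");   // placeholder connection string
            var db      = client.GetDatabase("centralpoint");             // placeholder database name
            var records = db.GetCollection<ResourceRecord>("resources");

            await records.InsertOneAsync(new ResourceRecord
            {
                Source = "hr_export.csv",
                Name = "Example row",
                ImportedUtc = DateTime.UtcNow
            });

            // Query back everything imported from one source, newest first.
            var fromHr = await records.Find(r => r.Source == "hr_export.csv")
                                      .SortByDescending(r => r.ImportedUtc)
                                      .ToListAsync();

            Console.WriteLine(fromHr.Count);
        }
    }

    The equivalent ADO.NET / LINQ to SQL code for SQL Server Express would already look familiar, which is roughly the trade-off: the document model is forgiving when the CSV-shaped data changes shape, while SQL Express keeps you on the tooling and query language you already know.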

    Read the article

  • KVM switching on the cheap

    - by Stuart Hemming
    I have a machine at home for my personal use. I also have a machine for work. I have a pair of DVI monitors. An ideal set up for a KVM switch. Sadly the cheapest switch I can find that will switch 2 DVI monitors is nearly £300. However, I can get, for just £40, a switch for a single monitor. Is there any reason why I couldn't employ 2 of these much cheaper switches? My thought was to have the keyboard, mouse and one of the monitors on one switch and just the 2nd monitor on the 2nd switch. Logically, I can't think of a reason why this wouldn't work, but I have no real background to be able to say, definitively, whether or not it would. If it makes any difference, both machines are running Win7. Appreciate any thoughts. -- Stuart

    Read the article

  • How do we keep Active Directory resilient across multiple sites?

    - by Alistair Bell
    I handle much of the IT for a company of around 100 people, spread across about five sites worldwide. We're using Active Directory for authentication, mostly served to Linux (CentOS 5) systems via LDAP. We've been suffering through a spate of events where the IP tunnel between the two major sites goes down and the secondary domain controller at one site can't contact the primary domain controller at the other. It seems that the secondary domain controller starts denying user authentication within minutes of losing connectivity to the primary. How do we make the secondary domain controller more resilient to downtime? Is there a way for it to cache the entire directory and/or at least keep enough information locally to survive a multi-hour disconnection? (We're all in a single organizational unit if that makes any difference.) (The servers here are Windows Server 2003; don't assume that we set this up correctly. I'm a software engineer, not an IT specialist.)

    Read the article

  • Reverting problems caused by checkinstall with gcc build

    - by slavik262
    I recently downloaded the GCC 4.6.2 source in order to play around a bit with C++11. Having been told about checkinstall and its usefulness in installing programs from source, I created a Debian package from the install using sudo checkinstall -D make install. Wanting to see how well the newly created package worked, I removed it using Synaptic Package Manager. As it turns out, the package checkinstall created from make install tried to remove every single file the installation process touched, including shared gcc libraries like /lib64/libgcc_s.so. Despite not being able to run a bunch of programs due to this missing dependency, I was able to restore my system back to normal by reinstalling the package from command line using dpkg. At this point I want to remove the package from the package manager since it's so dangerous, but not remove the installed files. I was looking around in /var/lib/dpkg and found that the package manager seems to be based on text files which list packages and such - can I just remove all mention of the package from the files in /var/lib/dpkg, or is there a safer way to go about this?

    Read the article

  • Spotlight: How Scandinavia's Largest Nuclear Power Plant Increased Productivity and Reduced Costs with Oracle's AutoVue

    - by [email protected]
    Ringhals nuclear power plant, which is part of the Vattenfall Group, is located about 60 km south-west of the beautiful coastal city of Gothenburg in Sweden. A deep concern to reduce environmental impact coupled with an effort to increase plant safety and operational efficiency have led to a recent surge in investments and initiatives around plant modification and plant optimization at Ringhals. A multitude of challenges were faced by the users in various groups that were involved in these projects. First, it was very difficult for users to easily access complex and layered asset and engineering information, which was critical to increased productivity and completing projects on time. Moreover, the 20 or so different solutions that were being used to view various document formats, not only resulted in collaboration complexity but also escalated IT administration costs and woes. Finally, there was a considerable non-engineering community comprising non-CAD specialists that needed easy access to plant data in an effort to minimize engineering disruption. Oracle's AutoVue significantly simplified the ability to efficiently view and use digital asset information by providing a standardized visualization solution for the enterprise. The key benefits achieved by Ringhals include: Increased productivity of plant optimization and plant modification by 3% Saved around $ 500 K annually Cut IT maintenance costs by 50% by using a single solution Reduced engineering disruption by allowing non-CAD users easy access to digital plant data The complete case-study can be found here

    Read the article

  • Getting .deb package dependencies for an offline Ubuntu computer through Windows

    - by user109500
    Basically I want to "batch" download a .deb file and all its dependencies at once on a Windows 7 machine (of which I do not have admin access, it is a public computer.) I've seen plenty of Ubuntu based fixes that require terminal and apt, I'm asking how to do this on Windows. (I am not sure if this question fits here but I haven't found anywhere else that it could go.) I've tried Keryx and Sushi-huh to try to get packages and their dependencies but these both require Python, Python can't be normally installed without admin permission. (Side note, I think I've seen programs bundle python so they can work without installing it to c:, is this possible as a workaround? Google isn't helping) If anyone wants to know I'm trying to download Krita and Blender for Ubuntu 12.10/AMD64 I have been able to manually download single .deb files and dependencies upwards to 38 dependent packages, but then those 38 packages depend on other packages, It's maddening to not have some way to automatically do this on Windows. *Edit Sorry I forgot to make it clear that my personal home computer is running Ubuntu 12.10 and the public computer I'm using to download is Windows 7

    Read the article

  • Lessons From OpenId, Cardspace and Facebook Connect

    - by mark.wilcox
    I think Johannes Ernst summarized pretty well what happened in a broad sense with regard to OpenId, Cardspace and Facebook Connect. However, I'm more interested in the lessons we can take away from this.

    First - the "Apple Lesson": if user-centric identity is going to happen, it's going to require not only technology but also a strong marketing campaign. I'm calling this the "Apple Lesson" because it's very similar to how the Apple iPad saw success versus the rest of the tablet market. The iPad is not only a very good technology product, it was also backed by a very good marketing plan. I know most people do not want to think about marketing here, but the fact is that nobody could really articulate why user-centric identity mattered in a way that the average person cared about.

    Second - the "Facebook Lesson": Facebook Connect solves a number of interesting problems in a way that is easy for both consumers and service providers. For a consumer it's simple to log in without any redirects. And while Facebook isn't perfect on privacy, no other major consumer-focused service on the Internet provides as much control over sharing identity information. From a developer perspective it is very easy to implement the SSO and fetch other identity information (if the user has given permission). This could only happen because a major company decided, with singular focus, to make it happen.

    Third - the "Developers Lesson": the Facebook Social Graph API is by far the simplest API for accessing identity information, which is another reason why you're seeing such rapid growth in Facebook-enabled websites. Using a combination of URL and JavaScript, the power a single HTML page now gives a developer writing Web applications is simply amazing. For example, it doesn't get much simpler than "http://api.facebook.com/mewilcox" for accessing identity. And while I can't yet share too much publicly about the specifics, the social graph API had a profound impact on me in designing our next generation APIs.

    Read the article

  • iproute2 preemptive route creation, I think...

    - by Bryan Hunt
    Firstly: I know I could do this the easy way with SSH, but I want to learn how to route. I want to route packets back through the same tun0 interface from which they came into my system. I can do it for single routes. This works:

    sudo ip route add 74.52.23.120 metric 2 via 10.8.0.1

    But I'd have to add one manually for each request that came down the pipe. I've taken the blue pill and followed the Netfilter & iproute (marking packets) tutorial at http://lartc.org/howto/lartc.netfilter.html, but it's oriented towards redirecting OUTGOING packets based upon markers. What I want is for a packet that comes in via tun0 not to be dropped, which is what's happening right now: running scapy or suchlike to receive packets, it doesn't seem to be receiving anything. Watching in Wireshark I see the initial SYN packets coming in on the tun0 interface, but that's as far as it gets without a static route as shown above. Am I nuts?

    Read the article
