Search Results

Search found 19796 results on 792 pages for 'bit twiddler'.


  • SDL: Clipping a Sprite Sheet from Left to Right

    - by 0X1A
    I'm trying to get a sprite sheet clipped in the right order, but I'm a bit stumped; every iteration I've tried has tended to be in the wrong order. This is my current implementation:

        Frames = (TempSurface->h / ClipHeight) * (TempSurface->w / ClipWidth);
        SDL_Rect Clips[Frames];
        for (i = 0; i < Frames; i++) {
            if (i != 0 && i % (TempSurface->h / ClipHeight) == 0)
                ColumnIndex++;
            Clips[i].x = ColumnIndex * ClipWidth;
            Clips[i].y = i % (TempSurface->h / ClipHeight) * ClipHeight;
            Clips[i].w = ClipWidth;
            Clips[i].h = ClipHeight;
        }

    TempSurface is the entire sheet loaded to an SDL_Surface and Clips[] is an array of SDL_Rects. What results from this is a sprite sheet set to SDL_Rects in the wrong order. For example, a sheet of dimensions 4x4 would desirably load as this:

        | 0 | 1 | 2 | 3 |
        | 4 | 5 | 6 | 7 |
        | 8 | 9 | 10| 11|
        | 12| 13| 14| 15|

    But is instead set in this order:

        | 0 | 4 | 8 | 12|
        | 1 | 5 | 9 | 13|
        | 2 | 6 | 10| 14|
        | 3 | 7 | 11| 15|

    What should I be doing for these to be set correctly?
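
    A hedged sketch of one way to get the row-major order the question wants: derive each clip's column and row directly from its index instead of tracking a ColumnIndex (names are reused from the snippet above; Columns is an assumed new helper variable):

        /* Sketch, not a verified drop-in: fill clips left-to-right, top-to-bottom. */
        int Columns = TempSurface->w / ClipWidth;
        for (i = 0; i < Frames; i++) {
            Clips[i].x = (i % Columns) * ClipWidth;   /* column cycles fastest */
            Clips[i].y = (i / Columns) * ClipHeight;  /* row advances every Columns clips */
            Clips[i].w = ClipWidth;
            Clips[i].h = ClipHeight;
        }

    The original loop advances y fastest (i modulo the row count picks the row), which is exactly the transposed, column-major order shown above.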


  • Phantom Local Disks appearing in my drive list

    - by Paul
    I seem to have several phantom Local Disks mapped to different letters that are 0 bytes in size. Strangely, they do not show up when I view my drives through Windows Explorer, but if I open an application such as ACDSee Pro or MS Word and then go to open a file, I can see all these Local Disks mapped to different letters. This means that when I plug in my external hard disk it ends up mapped to letter R instead of its usual G, which messes up any programs I have pointing to it by default. How did they get there, and more importantly, how do I get rid of them? I'm on a Windows 7 Home Premium 32-bit machine.


  • How can I find out the device ID of my unmounted DVD?

    - by fred.bear
    When I put a DVD into the DVD drive, it appears in Nautilus Places but is not automatically mounted (this is by personal choice). In this unmounted state, mount (of course) reports nothing, and likewise for df, but Nautilus is aware of the DVD hardware unit and has read the label, which it shows in Places. So it seems to me that Nautilus has already accessed the DVD device (did it temporarily mount it?). The main point of my question was to determine how to find the device ID of an unmounted device, but as I've been writing this, I now think it may not be as simple as that. This issue came up because I wanted to test the command cat iso-pieces.* | growisofs -Z /dev/dvd=/dev/stdin, but then realized that I didn't know how to get my DVD's device ID. And does the above command require a mounted device, or does it write directly to the device? As you can see, I'm a bit vague about devices :) Come to think of it, maybe Nautilus read the DVD device directly, because when all is said and done, something has to read/write directly to it. info growisofs says: "Under Linux it will most likely be an ide-scsi device such as /dev/scd0". How can I find this ID via a script?
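
    A hedged sketch of how a script might find the device node (assuming a udev-based Ubuntu, where /dev/dvd is a convenience symlink; the exact node names vary by system):

        # resolve the /dev/dvd symlink to the real node, e.g. /dev/sr0 or /dev/scd0
        readlink -f /dev/dvd

        # or ask udev about the drive's properties
        udevadm info --query=property --name=/dev/sr0 | grep -E 'DEVNAME|ID_CDROM'

    For what it's worth, growisofs writes to the device node directly; the disc should not be mounted while burning.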


  • Animating sprites in HTML5 canvas

    - by fnx
    I'm creating a 2D platformer game with HTML5 canvas and JavaScript. I'm having a bit of a struggle with animations. Currently I animate by getting preloaded images from an array, and the code is really simple: in player.update() I call a function that does this:

        var animLength = this.animations[id].length;
        this.counter++;
        this.counter %= 3;
        if (this.counter == 2)
            this.spriteCounter++;
        this.spriteCounter %= animLength;
        return this.animations[id][this.spriteCounter];

    There are a couple of problems with this one. When the player does two actions that require animating at the same time, the animation speed doubles; apparently this.counter++ is run twice per frame, and I imagine that if I start animating multiple sprites with this, the animation speed will multiply by the number of sprites. The other issue is that I couldn't make the animation run only once instead of looping while a key is held down. Someone told me that I should create a function Animation(animation id, isLooped boolean) and then use something like player.sprite = new Animation("explode", false), but I don't know how to make it work. Yes, I'm a noob... :)
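
    A hedged sketch of the suggested Animation object (the constructor shape and field names are assumptions, not a known API; giving each animation its own counters also avoids the shared-counter speed-up):

        // One Animation instance per animated action, so counters are never shared.
        function Animation(frames, isLooped) {
            this.frames = frames;      // array of preloaded images
            this.isLooped = isLooped;
            this.tick = 0;             // updates since the animation started
            this.index = 0;            // current frame
            this.done = false;         // true once a non-looping run finishes
        }

        Animation.prototype.update = function () {
            if (!this.done && ++this.tick % 3 === 0) {  // advance every 3rd update, as before
                this.index++;
                if (this.index >= this.frames.length) {
                    if (this.isLooped) this.index = 0;
                    else { this.index = this.frames.length - 1; this.done = true; }
                }
            }
            return this.frames[this.index];
        };

    Usage would then look something like player.sprite = new Animation(animations["explode"], false), where animations maps names to frame arrays, and player.update() draws whatever player.sprite.update() returns.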


  • Why are PNG-8 files mangled when opened in Photoshop?

    - by Daniel Beardsley
    Why are some 32-bit PNGs opened in Photoshop with indexed colors and no transparency? For instance, I grabbed a PNG icon file of the Stack Overflow logo at http://blog.stackoverflow.com/wp-content/uploads/icon-so.png. When opening it in Photoshop CS3, it apparently treats it as indexed color and gets rid of the alpha channel. Changing the image mode in Photoshop to RGB doesn't change the image at all. I've tried this with a few other PNGs and it seems hit or miss. When viewed in other programs, it displays fine. (Left: the PNG opened in Photoshop; right: a screen grab of the PNG from the browser.) What gives? Does Photoshop not interpret the PNG file format correctly?


  • Windows 7 - XP Mode - Apache

    - by Howard
    I've set up Virtual PC and XP Mode on my Windows 7 Pro machine. Using Apache 2.0.52 I have no problems getting my website up and running on the Windows 7 machine, but under VPC/XP Mode the best I can do is localhost. What do I need to do to enable HTTP connections? I need XP Mode because, besides the website, I also run a web BBS and a DOS-based (via telnet) BBS, and some of the apps in the DOS BBS just won't work under 64-bit, no matter what compatibility settings are used. Thanks in advance...


  • Any good resources on setting up an ubuntu virtual machine for web development?

    - by Relequestual
    I'm currently on my placement year at uni, with 4 months left. Before working at my current place I had not used a *nix environment for web development and had used WAMP. Over the past year I have found some very interesting new tech that requires a bit more than my shared hosting even to play with (e.g. node.js, RoR 3). At work we use a virtual machine for development, but that's all been set up and configured to match the live servers, and is managed with a Puppet server. Are there any really good resources for setting up and configuring an Ubuntu VM as a web server? Work currently uses Ubuntu, so I would assume this is a good OS to use. I do of course know how to use Google, but the noise ratio is just too big, so I thought I'd ask here, as I know many of you will have a ton of bookmarks. Cheers.


  • Acrobat Reader, and indeed all Adobe products, are freezing and crashing on print

    - by 5tratus
    Everything was working fine right up till I had to do some driver work to get my scanner working; now I can't seem to print from any Adobe product. I click print and the program freezes and stops responding; in the case of Acrobat Reader, it crashes. In the case of InDesign CS4 I have to stop the process in Task Manager, and in the case of Fireworks CS3, I think it just crashes. Printing a PDF also hangs and crashes inside the Firefox and IE browsers. My printer works: I can print from MS Word and Excel, and directly by right-clicking a non-Adobe file and choosing print. But when I try it from an Adobe product, it hangs. I'm running Windows 7 64-bit, my version of Adobe Reader is 10.1.11, Windows is updated, and I don't have any unusual extensions.


  • FreePBX: Asterisk in the Cloud (EC2) Audio Problems

    - by neezer
    Please pardon the newbie question, but I can't seem to figure this out. I followed Voxilla's tutorial to the tee: http://voxilla.com/2009/10/15/voxill...p-by-step-1457. But when making calls, my softphones connect, yet there is no audio (in either direction). I know from poking around the forums that this is generally caused by two factors: NAT and audio codecs. I (being new to the arena), however, don't know which. I believe I have Asterisk and the clients restricted to just ulaw, and I also believe I have the correct ports open and my externip set correctly (I think the Voxilla AMI does this automatically, since it's in the cloud). I'm a bit lost. I'd be happy to post whatever configuration files might help, provided you tell me where they are on the filesystem. But like I said before, this is effectively a vanilla install of Voxilla's own FreePBX AMI. I'd appreciate any help or guidance here. Thanks!


  • Resume on 30 Days of SharePoint

    Dear readers, as you might have noticed, it was an organisational disaster on my end! Even though I continued my studies and research on Microsoft SharePoint 2013 during the last 30 days, I wasn't able to write an article a day to keep you posted on my progress. Nonetheless, I gathered a good number of additional blogs, mainly SharePoint MVP sites, and online forums which will be helpful in the next couple of weeks while I'm actually going to develop a C#-based client which will enable an existing 'legacy' application to use SharePoint as a document management system (DMS) besides other already existing solutions.

    Finding excuses

    Well, no. Not really. I simply didn't block any (or enough) time every day to write down my progress during my own challenge. My log book on learning about SharePoint stands at 41 hours and 15 minutes for this month, which means that I spent an average of more than 1 hour per day on getting into SharePoint. I know that might sound a little bit low, but also keep in mind that I went for the challenge on top of my daily job and private responsibilities. During the same period there had been two priority-0 incidents from clients (external root cause) which took precedence over this leisure project.

    More to come

    Anyway, it was a first trial, and despite the low level of reporting on my blog, I'm confident about what I learned during the last 30 days, and I'm ready to implement the client's requirements. At least, I would say that I have a better understanding of the road map, the path to walk, for the next month. As time and secrecy allow, I'm going to note down some bits and pieces. During the process of development, I'm going to 'cheat' on the challenge summary article and add links to those new entries, just for the sake of completeness.

    Next challenge?

    Hmm, there had been ideas during the last meetup of the Mauritius Software Craftsmanship Community (MSCC) regarding certifications in IT, and eventually we might organise some kind of study group for specific exams, most probably Microsoft exams towards MCSD Web Developer or Windows Developer.


  • Are specific types still necessary?

    - by MKO
    One thing occurred to me the other day: are specific types still necessary, or are they a legacy that is holding us back? What I mean is: do we really need short, int, long, bigint, etc.? I understand the reasoning: variables/objects are kept in memory, memory needs to be allocated, and therefore we need to know how big a variable can be. But really, shouldn't a modern programming language be able to handle "adaptive types", i.e. if something is only ever allocated in the short-int range it uses fewer bytes, and if something is suddenly allocated a very big number the memory is allocated accordingly for that particular instance? Float, real and double are a bit trickier, since the type depends on what precision you need. Strings should, however, be able to take up less memory in many instances (in .NET) where mostly ASCII is used, but strings always take up double the memory because of Unicode encoding. One argument for specific types might be that they're part of the specification: for example, a variable should not be able to be bigger than a certain value, so we set it to short int. But why not have type constraints instead? It would be much more flexible and powerful to be able to set permissible ranges and values on variables (and properties). I realize the immense problem in revamping the type architecture, since it's so tightly integrated with the underlying hardware, and things like serialization might become tricky indeed. But from a programming perspective it should be great, no?


  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules have come as two different files, a normal source file and a minified file. Seeing that the performance and speed of a page can be increased by minifying the size of the files, we're looking into doing that to our pages as well. The problem we run into is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. We have some pages that have their JS both in include files and in the page, depending on whether the function is only really used in that page or in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them. Some functions also live in a single page because it doesn't really make sense to put them into one master file. What I have seen a few other people do online is use one master JS file, place all of their JavaScript into it, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if it fully works for our purposes. What I'm looking for is some direction on this. I'm in favor of taking all of our JS, putting it in one include file, and just having it included in every page that is hit. However, not every page we have needs every bit of JS. So would it be worth concatenating and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?


  • Getting error message when trying to start a virtual machine

    - by Sunil J
    I have been using VMware on Windows for a long time, but after a long wait I moved to VirtualBox on Ubuntu 11.10. I installed Ubuntu 32-bit, installed all available updates, and installed VirtualBox. When I try to create a new Windows installation inside VirtualBox, I get the following error messages.

    1st error dialog:

        VirtualBox - Error
        Failed to open a session for the virtual machine Windows XP.
        The virtual machine 'Windows XP' has terminated unexpectedly during startup with exit code 1.
        Details:
        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: Machine
        Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}

    2nd error dialog:

        VirtualBox - Error In suplibOsInit
        Kernel driver not installed (rc=-1908)
        Please install the virtualbox-dkms package and execute 'modprobe vboxdrv' as root.

    Steps I tried: I have already tried reinstalling VirtualBox. Google results seem to indicate that the problem happens due to kernel updates. Is there any way I can get this working? I need this for malware analysis, and if VirtualBox is going to crash on me all the time, then I won't be able to use Ubuntu for work.

    Output of dpkg -l | grep virtual:

        rc  virtualbox     4.1.2-dfsg-1ubuntu1  x86 virtualization solution - base binaries
        rc  virtualbox-qt  4.1.2-dfsg-1ubuntu1  x86 virtualization solution - Qt based user interface
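
    A hedged sketch of the fix the second dialog points at (package name as the message suggests; on some setups the matching kernel headers are needed too, so dkms can rebuild the module after a kernel update):

        sudo apt-get install virtualbox-dkms linux-headers-$(uname -r)
        sudo modprobe vboxdrv

    Note also that the rc status in the dpkg output above means those virtualbox packages are currently removed with only their config files left, which would fit this picture.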


  • Cherokee high virtual memory usage even after disabling I/O Cache

    - by nidheeshdas
    I have Ubuntu 10.04 LTS 64-bit running in an OpenVZ container and Cherokee 1.0.8 compiled from source. The virtual memory usage of cherokee-worker is around 430 MB even after disabling the I/O cache (Advanced - I/O Cache - NOT enabled). Is this issue particular to OpenVZ? Many people report having successfully reduced virtual memory usage by disabling the I/O cache. htop output: http://imgur.com/z5JEL.jpg (newbies are not allowed to post images). Thanks in advance. nidheeshdas


  • How do I write to an outer TrueCrypt volume when the inner volume protection prevents writing?

    - by con-f-use
    In a nutshell

    After some time using the outer volume of a hidden volume in TrueCrypt, I cannot write to the outer volume anymore. The protection of the inner volume always kicks in first. How do I fix this?

    Details

    I'm using TrueCrypt's two-layered encryption on a USB stick. The outer container carries my semi-sensitive stuff, while the inner hidden volume has the more valuable information. I use both the inner and outer volumes regularly, and that is part of the problem. TrueCrypt can mount the outer volume for writing while protecting the inner. Usually the inner volume, when not protected this way (or when mounted read-only), would be indistinguishable from free space; that is of course part of TrueCrypt's plausible-deniability scheme. At the beginning, everything worked as expected: I could copy and delete data on the outer volume as I pleased. Now it seems that I have written and deleted enough data to have filled the outer volume once. Despite the write protection, Ubuntu now tries to write to the continuous "free space" that is the inner volume. It does that although there is enough other free space on the outer volume; but that free space used to hold data, so it is fragmented, and the filesystem prefers to write to continuous space. The write to the continuous free space of the outer volume of course fails (with the error message in the picture above), as TrueCrypt's inner-volume protection kicks in.

    The Question

    I know this is expected behaviour, but is there a better way to write to the outer volume that does not attempt to write to the hidden "free space" at the end? The whole question could be rephrased more generally as: how do I control where on a partition data is written in Ubuntu?


  • Firefox loses app tabs when I shut down my computer sometimes

    - by Xitcod13
    When I shut down my computer, sometimes my app tabs do not reopen in Firefox and I need to recreate all of them, which is pretty annoying. I don't know if this is a common bug; it might be specific to my computer. I run Vista x64, and it still seems to have quite a few problems. Sometimes when I tell it to shut down it does not, simply showing the shutting-down screen for hours, so I need to turn it off manually. I think that might be messing up Firefox, but I'm not sure. I also found a related thread on Bugzilla about this problem, but I don't think it's the same issue, as mine only happens when I shut down my computer.


  • Default maximum heap size -- Ubuntu 10.04 LTS, openjdk6-jre

    - by sachin
    I just installed openjdk-6-jre on Ubuntu 10.04:

        java version "1.6.0_20"
        OpenJDK Runtime Environment (IcedTea6 1.9.2) (6b20-1.9.2-0ubuntu1~10.04.1)
        OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)

    Every time I run java I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    This can be solved by specifying a maximum heap size and running java -Xmx256m. But is there any way to permanently fix this error, i.e. set the default heap size to 256 MB so that I do not need to specify the max heap size every time I run the command?
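
    One hedged approach (an assumption about the setup, not the only fix): the JVM reads options from the _JAVA_OPTIONS environment variable, so a default can be exported from a shell profile:

        # e.g. appended to ~/.bashrc; every java invocation then picks this up
        # (OpenJDK prints a "Picked up _JAVA_OPTIONS" notice when it does)
        export _JAVA_OPTIONS="-Xmx256m"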


  • RAID 1 Mirror Error with Two Western Digital 500GB Drives

    - by bm678
    I have Windows 7 Ultimate 64-bit installed on a Western Digital 500 GB drive (WD5000BEVT-22ZAT0) that was partitioned automatically by Windows as a 100 MB System Reserved partition and a 465.66 GB drive C. There is also an unallocated second Western Digital 500 GB drive (WD5000BPVT-22HXZT1) that I want to use for RAID 1 to mirror the first drive, but I get an error message stating "ALL DISKS HOLDING EXTENTS FOR A GIVEN VOLUME MUST HAVE THE SAME SECTOR SIZE, AND THE SECTOR SIZE MUST BE VALID." I uninstalled Windows patch KB-982018, but I still get the same error message. Could you please let me know how to resolve this? Thanks.


  • How to convince non-programmer his notions about computers are wrong?

    - by Suma
    Recently I came across a question about 64-bit computing on Super User for which the accepted answer seemed like complete nonsense to me. I made two comments pointing out obvious mistakes. To my perplexity, the comments had the effect of alienating the poster of the answer. I have no idea how I could convince him he is wrong, as he does not seem to understand the basics of the problem. He seems to be mixing up concepts like bus size and address size; see this pearl of a sentence: "it will allow you to address all of your RAM because your processor is reading from your RAM in 64 bit words." The poster asks me to prove my claim by quoting a respectable source, but I have no idea where to find such a source, as I doubt anything I would consider relevant would be relevant for him (it would probably be too technical). I think this instance can serve as an illustration of communication problems between programmers and users (and to a certain extent of any expert vs. non-expert communication). How should a programmer handle communication like this, so that it does not become a useless quarrel?


  • Interactive command to let user change directory in bash

    - by Rich
    I am looking for a curses-based way (bash, C, doesn't really matter) of letting a user choose a folder, or even a file, in roughly the same way they would in Midnight Commander. I envisage using up/down to move the cursor, Esc to cancel, and Enter to select the item under the cursor. If the item is a file, return the full path to that file; if the item is a folder, change into that folder. Does anyone know of one that exists? If not, how would I go about writing one? I'm mainly a Java programmer, so I could use JavaCurses, but that feels a bit like overkill.
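
    A hedged sketch using the curses-based dialog utility (assuming the dialog package is installed; its --fselect widget already gives arrow-key navigation, Esc to cancel, and prints the chosen path):

        #!/bin/bash
        # --stdout routes the selection to stdout; the exit status is non-zero on cancel
        choice=$(dialog --stdout --title "Choose a file" --fselect "$PWD/" 14 60)
        if [ $? -eq 0 ]; then
            echo "Selected: $choice"
        fi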


  • Server downtime - are these APC warnings the cause?

    - by DisgruntledGoat
    Yesterday I had a problem with my dedicated server (Ubuntu 10.04, LAMP). It wasn't down per se, but it was running incredibly slowly, as if we had a massive overload of visitors (though I don't think we did). It's running smoothly again now. I've been checking through log files etc. to see if I can find any issues; the only strange thing is a bunch of these errors, occurring at about the same time as the downtime:

        [apc-warning] Unable to allocate memory for pool. in [file] on line 49.

    And a bit later on:

        [apc-warning] GC cache entry '[file1]' (dev=2056 ino=8988092) was on gc-list for 3601 seconds in [file2] on line 746.

    Could these errors indicate the cause of the server slowdown, or are they simply a result of the server being slow in the first place? What would be the solution?
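
    For context, a hedged note: "Unable to allocate memory for pool" typically means APC's shared-memory segment filled up, so one sketch of a remedy is to enlarge it in php.ini (the setting name is real; the value is an assumption to tune):

        ; older APC versions take the size in MB without a suffix (e.g. 64),
        ; newer ones accept a suffixed value
        apc.shm_size = 64M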


  • How can I measure TCP timeout limit on NAT firewall for setting keepalive interval?

    - by jmanning2k
    A new (NAT) firewall appliance was recently installed at $WORK. Since then, I'm getting many network timeouts and interruptions, especially for operations which require the server to think for a bit without sending a response (svn update, rsync, etc.). Inbound SSH sessions over VPN also time out frequently. That clearly suggests I need to adjust the TCP (and SSH) keepalive times on the servers in question in order to reduce these errors. But what is the appropriate value I should use? Assuming I have machines on both sides of the firewall between which I can make a connection, is there a way to measure what the time limit on TCP connections might be for this firewall? In theory, I would send packets with gradually increasing intervals until the connection is lost. Any tools that might help (free or open source would be best, but I'm open to other suggestions)? The appliance is not under my control, so I can't just look up the value, though I am attempting to ask what it currently is and whether I can get it increased.
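
    A hedged sketch in C of the probing idea described above (the address and port are placeholders for a listener on the far side, e.g. started with nc -lk 9000; note that the first write after the NAT entry expires can appear to succeed locally, with the failure only surfacing on a later write or as a reset):

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>

        int main(void) {
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port = htons(9000);                       /* assumed listener port */
            inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);  /* placeholder address */

            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
                perror("connect");
                return 1;
            }
            /* idle for progressively longer periods, then send one byte */
            for (int idle = 60; idle <= 7200; idle *= 2) {      /* 1 min .. 2 h */
                sleep(idle);
                if (send(fd, "x", 1, MSG_NOSIGNAL) < 0) {
                    printf("send failed after %d s idle: NAT entry likely expired\n", idle);
                    return 0;
                }
                printf("connection survived %d s idle\n", idle);
            }
            return 0;
        }

    Bisecting between the last surviving interval and the first failing one would then narrow down the actual limit.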


  • Is there any chance that my data will get silently corrupted with a robocopy SMB network transfer?

    - by Archagon
    I'm setting up a NAS box for the first time. At the moment, I have most of my data backed up to a few local hard drives, and I intend to transfer all the data to my NAS over Ethernet once the RAID array is set up. Since this is all happening over the network, I'm a bit worried about my data getting silently corrupted during the transfer. From what I understand, data generally doesn't get corrupted without notice on local transfers because a checksum is performed at some point by the drive or the OS. (This could be totally wrong.) Does the same thing happen with SMB, or is it up to whoever does the transfer to check the integrity of the data? And if it doesn't happen with SMB, is there a protocol that does ensure data integrity? I know that rsync can checksum a transfer, but I'm on Windows and I already have a robocopy configuration that I like. Will my data be safe, or do I have to use an external checksum tool to make sure?
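
    For what it's worth, robocopy has no content-checksum option of its own, so a hedged sketch of an external spot-check with Windows' built-in certutil (paths are placeholders):

        :: compare SHA-256 hashes of a source file and its copy on the NAS
        certutil -hashfile "D:\data\somefile.bin" SHA256
        certutil -hashfile "\\nas\share\somefile.bin" SHA256

    If the two hashes match, that file survived the transfer intact.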


  • Information about rendering, batches, the graphical card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for; here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background information, I'm clueless. I'm using XNA, so it'd be nice if theory could be related to that. So what I actually want to know is: what happens when and where (CPU/GPU) when you take specific draw actions? What is a batch? What influence do effects, projections, etc. have? Is data persisted on the graphics card, or is it transferred over at every step? When there's talk about bandwidth, are we talking about a graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process happens; that's the GPU's business. I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, to adapt my architectures and practices to that. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)


  • No signal showing on LCD TV when connecting laptop via VGA cable [migrated]

    - by Amit Prajapati
    I am trying to connect my laptop to a Samsung LCD TV with a VGA-to-HDMI cable. My laptop finds the Samsung TV in the display settings, but when I press the Fn+F7 key my TV displays "No Signal".

    My laptop specifications: Lenovo ThinkPad R61, model 8935AE7; Windows 7 Ultimate 32-bit; 2 GB RAM; VGA port available; no HDMI port.

    My TV specifications: Samsung LCD 26"; HDMI port available; VGA port available.

    I want to know what the problem is. When I connect another Dell laptop (Windows 7 32-bit) with an HDMI-to-HDMI cable, it works properly. Thanks in advance!

