Search Results

Search found 18677 results on 748 pages for 'current'.

  • How to quickly save what is currently shown in cmd.exe to a file

    - by Zeiga
    Is there a quick way or command to save the current standard output of cmd.exe or PowerShell to a file? For example, I have run a bunch of commands in cmd.exe that generated hundreds of lines of standard output. Ideally, I am looking for a single command that does "select all" and saves it to a file automatically. Note: I've read this. But I don't want to change my original commands, so ">" or ">>" redirection cannot be used in this scenario. Thanks.

  • AndEngine GLES2 - getting Black screen on emulator 4.1

    - by dizworld.com
    I'm new to AndEngine. I created the following code:

        public class MainActivity extends BaseGameActivity {
            static final int CAMERA_WIDTH = 800;
            static final int CAMERA_HEIGHT = 480;
            public Font mFont;
            public Camera mCamera;
            // A reference to the current scene
            public Scene mCurrentScene;
            public static BaseActivity instance;

            public EngineOptions onCreateEngineOptions() {
                instance = this;
                mCamera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
                return new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR,
                        new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), mCamera);
            }

            @Override
            public void onCreateResources(OnCreateResourcesCallback arg0) throws Exception {
                mFont = FontFactory.create(this.getFontManager(), this.getTextureManager(), 256, 256,
                        Typeface.create(Typeface.DEFAULT, Typeface.BOLD), 32);
                mFont.load();
            }

            @Override
            public void onCreateScene(OnCreateSceneCallback arg0) throws Exception {
                mEngine.registerUpdateHandler(new FPSLogger());
                mCurrentScene = new Scene();
                Log.v("Scene", "enter");
                mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f));
                // return mCurrentScene;
            }

            @Override
            public void onPopulateScene(Scene arg0, OnPopulateSceneCallback arg1) throws Exception {
                // TODO Auto-generated method stub
            }
        }

    The examples I found on other sites return the scene, but in AndEngine GLES2 onCreateScene() does not return a scene, so my first run is just a black screen. Any suggestions? :)
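
    For reference, a minimal sketch of how the GLES2 callback-style lifecycle is usually completed inside a MainActivity like the one above: the scene is handed to the callback instead of being returned. The method names match the AndEngine GLES2 branch as I know it; treat the exact signatures as an assumption if your version differs.

        @Override
        public void onCreateScene(OnCreateSceneCallback pOnCreateSceneCallback) throws Exception {
            mEngine.registerUpdateHandler(new FPSLogger());
            mCurrentScene = new Scene();
            mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f));
            // In GLES2 the scene is not returned; it is delivered to the callback.
            pOnCreateSceneCallback.onCreateSceneFinished(mCurrentScene);
        }

        @Override
        public void onPopulateScene(Scene pScene, OnPopulateSceneCallback pOnPopulateSceneCallback) throws Exception {
            // Signal that population is done, otherwise the engine keeps waiting.
            pOnPopulateSceneCallback.onPopulateSceneFinished();
        }

    The same pattern applies to onCreateResources(), which is expected to end with pOnCreateResourcesCallback.onCreateResourcesFinished().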

  • How to make nginx only respond to one domain?

    - by larryzhao
    I am pretty new to nginx; I host my Rails application on nginx + Passenger. I want my website to be accessible through only one domain, so I set up my nginx conf like the following:

        server {
            listen 80;
            server_name mydomain.com www.mydomain.com;
            root /var/deploy/myapp/current/public;
            passenger_enabled on;

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 1y;
                add_header Cache-Control public;
            }
        }

    I specify the server_name directive, but it still answers anything that points to this IP; I can see in the access.log that it responds to other domain names. Is there anything I am doing wrong?
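
    For context, nginx routes a request whose Host header matches no server_name to the default (or first-defined) server block for that port, so a common approach is a separate catch-all block that drops such requests. A minimal sketch using standard nginx directives, to be adapted to the layout above:

        # Catch-all: any Host that does not match another server_name lands here.
        server {
            listen 80 default_server;
            server_name _;
            return 444;  # nginx-specific code: close the connection without a response
        }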

  • Can't boot into XP after setting up dual boot with Win 7 vhd.

    - by bebop
    I set up a dual boot of Windows 7 from a VHD on two XP machines. On one of them everything went fine, and I get the option to choose the OS when I turn the machine on. The other was slightly different: rather than seeing the current OS disk (the one with XP on it) as C:\ while I was setting up the Windows 7 VHD during install, it saw the disk as D:. I didn't think anything of it and went ahead and created a VHD on the D: drive. Now when I turn this machine on, it boots straight to Windows 7 and I never get the option to choose XP. When I look at the boot options in msconfig, I only see Windows 7. How can I go about adding the old XP drive as a boot option at startup again? Edit: Strangely, when I rebooted this time (perhaps the first time since I removed the install DVD) it booted to XP. I suppose I'll just have to reinstall Windows 7 again in a new VHD...

  • License validation and calling home

    - by VitalyB
    I am developing an application that, when bought, can be activated using a license. Currently I am doing offline validation, which is a bit troubling to me. I am aware there is nothing to be done against cracks (i.e. modified binaries); however, I am thinking of trying to discourage license-key pirating. Here is my current plan:

    - When the user activates the software, after offline validation is successful, it tries to call home and validate the license. If home approves of the license, or if home is unreachable, or if the user is offline, the license gets approved. If home is reached and says the license is invalid, validation fails.
    - The licensed application calls home the same way on every startup (in the background). If the license is revoked (i.e. pirated or generated via a keygen), the license gets deactivated.

    This should help with piracy of licenses: an invalid license will be disabled, and a valid license that was pirated can be revoked (and its legal owner supplied with a new license). Pirate users will be forced to use cracked versions, which are usually version-specific and harder to come by. While it generally sounds good to me, I have some concerns:

    - Users tend not to like home-calling and online validation. Would that kind of validation bother you, even though the application stays licensed in the offline/failure case?
    - It is clear that the whole scheme can be thwarted by going offline, using a firewall, etc. I think the bother of doing one of these is great enough to discourage casual license sharing, but I am not sure.
    - As it goes in general with licensing and DRM variations, I am not sure the time I spend on this kind of protection isn't better spent improving my product.

    I'd appreciate your input and thoughts. Thanks!
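
    As a concrete illustration of the plan described above, here is a minimal Java sketch of the fail-open check. The class and method names are invented for this example, and the offline check and server call are stubs, so treat it as a sketch of the flow rather than a real implementation.

        import java.io.IOException;

        public class LicenseValidator {

            enum ServerVerdict { VALID, REVOKED }

            /** Returns true if the application should run as licensed. */
            public boolean validate(String licenseKey) {
                // 1. Offline validation is mandatory; a malformed key always fails.
                if (!offlineSignatureCheck(licenseKey)) {
                    return false;
                }
                // 2. The online check is advisory: only an explicit "revoked" answer
                //    from home disables the license (fail-open on any error).
                try {
                    ServerVerdict verdict = callHome(licenseKey);
                    return verdict != ServerVerdict.REVOKED;
                } catch (IOException unreachable) {
                    // Home is down or the user is offline: keep the license approved.
                    return true;
                }
            }

            private boolean offlineSignatureCheck(String licenseKey) {
                // Placeholder: verify the key's signature/checksum locally.
                return licenseKey != null && !licenseKey.isEmpty();
            }

            private ServerVerdict callHome(String licenseKey) throws IOException {
                // Placeholder: HTTP request to the vendor's activation endpoint.
                return ServerVerdict.VALID;
            }
        }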

  • 3D physics engine for accurate collision handling on desktop/laptop computers (non-console)

    - by Georges Oates Larsen
    What are your suggestions for a physics engine that satisfies the following criteria?

    - Capable of calculating collisions between multiple concave mesh-based colliders.
    - Handles many collisions going on at once (for instance one mesh being wedged between two others, which themselves may be wedged between two meshes).
    - Does not allow collider pass-through, even at high speeds. For instance, if I am applying force to a programmatically hinged object that makes it spin, I do not want it to pass through another rigidbody that it collides with while spinning. I have this problem using PhysX.
    - As implied before, reacts well to hinged objects; preferably it has its own implementation of a hinge, but I am willing to program my own. The important part is that it has some sort of interface that guarantees accurate collision tracking even when dealing with these things.
    - Platform independent: runs on Mac as well as PC, and not tied down to specific graphics cards.

    I think that's the best way to explain what I am looking for. Basically, I need SUPER reliable collisions, something that can't be accomplished with a simple ray-casting approach that sends a ray from the object's last position to its current position (as this object may be potentially large and colliding with small objects via rotation). Bonus points for also suggesting an OPEN SOURCE engine.

  • Wi-Fi signal while keeping the internet cable

    - by daGrevis
    So the situation is that I have an Ethernet cable which provides internet to my computer. What I want is to have a Wi-Fi connection in my house while keeping an Ethernet cable (like I have now) for my PC. I will use the Wi-Fi for my laptop and mobile phone. I think I need a router for that and I'm looking at the Asus RT-N16 (suggested on Coding Horror), but I am not sure. Is it the right thing for me, and will I be able to get a Wi-Fi signal and keep the Ethernet cable? I guess the setup would be that the current cable goes into the router, the router provides the Wi-Fi signal and gives back a new cable... or something like that. Thanks for any advice! And sorry if this topic isn't on the right site.

  • Hard time installing Ubuntu

    - by Nick
    I have an MSI GT780DXR that currently boots Windows 7. I've been trying to dual-boot Windows 7 and Ubuntu for some time now. Here are the specs that I think would make a difference:

    - Windows 7
    - 2 x 500 GB hard drives in RAID 0, 7200 RPM (hardware RAID; I'm not sure if it's a dedicated RAID card though)
    - Nvidia GT570M

    Background: I tried to install 12.04 (64-bit) a few times, but the desktop live CD and pen drive both boot to a black screen. I've tried Wubi, but it boots to a black screen as well. I then tried the alternate 12.04 (64-bit) installer and went through the installation all the way to partitioning. I let Ubuntu detect the RAID setup and I set up my swap, /, and home partitions, using my free space to create the three partitions. I tried to resize the Windows drive and it told me I couldn't and to be happy with my current setup. When I finally got past that, I got an error installing GRUB 2 and decided to skip it and continued on to finish the installation. When I tried to boot up I got an "invalid partition table" error. A Windows recovery disc and a GParted live CD couldn't find any hard drives. I ended up following advice and typed this into the recovery command prompt:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildBcd

    It worked, and here I am now. The question is: how would I be able to dual-boot Windows 7 and Ubuntu 12.04 with this setup? Thanks.

  • Caching with in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on subsequent requests (we run singleton services). Any update goes to the DB, then updates the in-memory dictionary. It all seems simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data in the cache until then. There are cases where we store millions of rows in memory because the table has many relations to other tables and we need to show aggregate data quickly. All this cache handling is a big part of our codebase, and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and makes it hard to understand the actual business logic. However, I don't think we can serve data in a reasonable amount of time if we have to hit the database every time. I am unhappy about the current situation, but I don't have a better alternative. My only idea would be to use the NHibernate 2nd-level cache, but I have nearly no experience with it. I know many companies use Redis or Memcached heavily to gain performance, but I have no idea how I would integrate them into our system. I also don't know whether they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into?
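
    For reference, what is described above is essentially a read-through/write-through cache. Here is a minimal sketch of that pattern in Java (the repository interface and customer type are invented for illustration; the stack above appears to be .NET, so this only shows the shape of the idea, not actual code for that system):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical repository standing in for the real data access layer.
        interface CustomerRepository {
            CustomerData loadForService(long customerId);
            void save(CustomerData data);
        }

        class CustomerData { /* fields relevant to the service */ }

        class CustomerCache {
            private final Map<Long, CustomerData> cache = new ConcurrentHashMap<>();
            private final CustomerRepository repository;

            CustomerCache(CustomerRepository repository) {
                this.repository = repository;
            }

            // Read-through: fetch from the DB only on a miss.
            CustomerData get(long customerId) {
                return cache.computeIfAbsent(customerId, repository::loadForService);
            }

            // Write-through: persist first, then refresh the cached copy,
            // so the dictionary never holds data the DB has not seen.
            void update(long customerId, CustomerData data) {
                repository.save(data);
                cache.put(customerId, data);
            }

            // Invalidation hook for when another writer touches the DB.
            void evict(long customerId) {
                cache.remove(customerId);
            }
        }

    The point of the write-through update is that the dictionary never holds state the database has not accepted, which is exactly the out-of-sync problem described above.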

  • Queueing up character actions

    - by TheBroodian
    I'm developing a 2D platformer with action-fighter elements. Currently things are working relatively smoothly, but I'm having difficulty sorting something out. For now, keeping my character's states and actions separated and preventing them from stepping on each other's toes is working out well, but I would like to add a feature to make him behave a bit more fluidly for the player. At the moment he has numerous attacks and abilities that he can execute, all of them triggered by button presses. Here lies the problem: because everything is executed through button presses, I flag the game to disregard further button presses while an action is in progress, until that action has completed. Therefore, consecutive actions cannot be performed until the previous action has finished entirely. At runtime this behavior feels very icky and very ungamelike. In the games that rest most memorably at the forefront of my mind, the player can enter button commands while an action is in progress, and at the end of the current action the following action is executed (it seems like some sort of queue system). Can anybody offer any guidance on this?
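
    A common way to get that fluid feel is a small input buffer: button presses that arrive during an action are recorded instead of discarded, and the next buffered action starts the moment the current one finishes. Below is a minimal Java sketch of the idea; the Action interface and its start/update/isFinished methods are invented for illustration.

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Hypothetical interface for anything the character can do (attack, dash, ...).
        interface Action {
            void start();
            void update(float deltaSeconds);
            boolean isFinished();
        }

        class ActionQueue {
            private static final int MAX_BUFFERED = 2;   // keep the buffer short so input stays responsive
            private final Deque<Action> buffered = new ArrayDeque<>();
            private Action current;

            // Called from input handling: never reject a press, just buffer it.
            void request(Action action) {
                if (current == null) {
                    current = action;
                    current.start();
                } else if (buffered.size() < MAX_BUFFERED) {
                    buffered.addLast(action);
                }
            }

            // Called once per frame.
            void update(float deltaSeconds) {
                if (current == null) {
                    return;
                }
                current.update(deltaSeconds);
                if (current.isFinished()) {
                    // Chain straight into the next buffered action, if any.
                    current = buffered.pollFirst();
                    if (current != null) {
                        current.start();
                    }
                }
            }
        }

    Capping the buffer (or timestamping entries and dropping stale ones) keeps very old presses from firing long after the player meant them.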

  • email archive for multiple users

    - by evanmcd
    Hi, I'm moving a web site from one server to another, and am realizing that I need to move the name servers for the domain as well (they are set to the current host, not to the registrar). So, knowing that email service will stop as soon as I switch the DNS, I'm scrambling to figure out how to archive, and make available, the email data for folks who have mostly been using webmail for the past few years and may not even have a computer on which to install a client to download the mail. What does one do in this situation? Thanks for any help offered! Evan

  • Hosting files with support for file tagging / keywords

    - by Zev Chonoles
    I have a large (approx. 25GB) collection of files I would like to host online for people to view or download. I have a spare computer I can use as a dedicated server for these files. I'm looking for a method of, or piece of software for, hosting my files where I can assign tags or keywords to the files, and people viewing my files online can search the collection via the tags. By way of approximate solutions I've found so far, I see that there is software such as Collectorz.com or Readerware for creating databases of one's books / music / movies, and these databases can be searched by tags or keywords, and the databases can be made available and searchable online; this would suit my purposes except that my files are not necessarily books, music, or movies, and I want the files themselves accessible online, not a database describing my files. A commercially-available solution like the ones above would be acceptable, but I'd prefer to have the whole setup under my control (i.e. I'd like to either implement it by hand, or use commercial software that doesn't rely on using the company's servers, paying them a continued fee, etc.). The current extent of my internet experience is designing a few Google Sites, so I know there's a fair chance I won't understand the answers I receive, but I'm always happy to have a summer project :)

  • What are the practical limits on file extension name lengths?

    - by GorillaSandwich
    I started using DOS back before Windows, and ever since have taken it for granted that:

    - Every file has a file extension, like .txt, .jpg, etc.
    - That extension is always short (usually 3 letters).

    I learned early that the extension is basically just a hint to the OS as to what the content type is. Eventually I got exposed to Mac and Linux, files with no extensions, etc. And of course I've seen shorter extensions, like .rb and .py. I just noticed that Markdown-formatted files can have the extension .markdown, and it made me wonder: how long can that extension be? If I make it .mycrazylongextensiontypewoohoo, will certain operating systems or programs choke on the file? Are extension names generally short just for convenience, or is this based on some limitation, legacy or current?

  • How do I share my iPhoto photos with my ubuntu partition?

    - by Taryn East
    I have a MacBook Pro dual-booted with Snow Leopard and Ubuntu Karmic. I have recently imported hundreds of my photos into iPhoto, but I now want to be able to see them (and use them as desktop/screen saver images) from my Ubuntu partition (i.e. when the machine is running Ubuntu instead of Mac OS). Is there an easy way to do this directly from the iPhoto library, or do I have to move them all out to an external file directory or something? Further edit, just to make it clear: I have already uploaded my photos directly into iPhoto, then spent many days categorising, tagging and uploading to Flickr. Unless there's something I'm missing, I'm guessing it's likely too late to use the "don't copy into the iPhoto library" option. Happy to be proven wrong :) Perhaps somebody knows of a way to "export" the library without losing any of the current information, so that I can (from then on) keep the photos in an external library? I don't want to do this, though, if I lose the information that is currently there.

  • Optimising website IP for location

    - by Liam Sorsby
    From my understanding of SEO, websites are optimised for the current location of their IP address. For example, if xxx.xxx.xxx.xx resolves to the UK, then you are more likely to get higher rankings in the UK than in the USA. However, my query is this: when you use a CDN, you are storing a cached version of your website across multiple servers at strategic locations across the globe to reduce load time in the locations you're trying to target. Now if you use a CDN and geo-locate the website URL, it only resolves back to the USA (where our IP address resolves to), and it doesn't resolve to any other countries. As far as I know, you can have multiple IP addresses resolving to one domain (from different countries). Do CDNs really help to optimise the location of your website, or are they solely meant to optimise load time? Is there a better way to optimise for multiple countries with regards to the resolution of the IP address? Are VPNs, as per this post here, relevant to this? Any advice would be helpful.

  • Dump nginx config from running process?

    - by Sergio Tulentsev
    Apparently, I shouldn't have spent a sleepless night trying to debug an application. I wanted to restart my nginx and discovered that its config file is empty. I don't remember truncating it, but fat fingers and reduced attention probably played their part. I don't have a backup of that config file. I know, I should have made one. Luckily for me, the current nginx daemon is still running. Is there a way to dump its configuration to a config file that it'll understand later?

  • GPU Computing - # of GPUs supported

    - by TehTypoKing
    I currently have a desktop with 6 GPUs (3x HD 5970s) in non-CrossFire mode. Unfortunately, it seems that Windows 7 64-bit only supports up to 4 GPUs. I have not been able to find a reliable source to confirm or deny this. If Windows 7 has this limitation, is there a Linux flavor that supports more than 4 GPUs? In case you are wondering, this is not for gaming but for high-speed single-precision computing. With this current setup (if I can find 6-GPU support) I am looking to reach 13.8 teraflops. Also, my motherboard supports three 16x PCI Express gen 2 slots, and I have a 1500 W power supply plugged into a 20 A outlet. Windows is able to detect all 6 GPUs, although 2 of them display the warning "Drivers failed to load". To recap:

    - Can Windows support 6 GPUs?
    - If not, does Linux?

    Thank you.

  • YouTube custom thumbnails feature availability

    - by skat
    I've been trying to figure this out on my own for weeks, but now I give up. The 'custom thumbnails' feature on YouTube is such a controversial one; it was changed so much that even the FAQ on YouTube doesn't fully describe how it works (as far as I can see). I have a YouTube channel for one of my websites. This YouTube channel is the main marketing force for my website: it brings all the boys to my yard (I mean, website). So I have to use all the hacky-tricky stuff to increase my visibility on YouTube. And damn, those custom thumbnails are giving me a hard time... As far as I understand, this is the current state of the 'custom thumbnail' feature: "If your account is in good standing, you may have the ability to upload custom thumbnails for your video uploads." (from https://support.google.com/youtube/answer/138008) My channel is in good standing and has more than 50,000 views. So why is my account still not eligible for this feature? Does anyone have any idea?

  • what to do when ctrl-c can't kill a process?

    - by Dustin Boswell
    Ctrl-C doesn't always work to kill the current process (for instance, if that process is busy with certain network operations). In that case, you just see "^C" by your cursor, and can't do much else. What's the easiest way to force that process to die now without losing my terminal?

    Summary of the answers below:

    - Usually, you can press Ctrl-Z to put the process to sleep, and then do "kill -9 process-pid", where you find the process's pid with 'ps' and other tools.
    - In Bash (and possibly other shells) you can do "kill -9 %1" (or '%N' in general), which is easier.
    - If Ctrl-Z doesn't work, you'll have to open another terminal and kill the process from there.

  • Add Bookmark to IE automatically for new users on a computer

    - by Kyle Brandt
    When I set up a PC, I would like it so that when anyone logs into that PC from the domain, a couple of IT bookmarks will be in IE. I've read that I can do this with a domain-level group policy, but unfortunately my attempts at domain group policies have not gone well so far, so I'm wary (I'd rather not get into that in this question). Can I do this at the PC level when I deploy a new computer, so that any domain user who logs into the PC will have these bookmarks added when their profile is created (no roaming profiles)? These are XP machines, and the domain is run by 2003 controllers.

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that

        SELECT id, name, address
        FROM persons
        JOIN addresses ON persons.id = addresses.person_id;

    could be better written as (or is better written than)

        SELECT persons.id, persons.name, addresses.address
        FROM persons
        JOIN addresses ON persons.id = addresses.person_id;

    while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git, you can go back to the first commit ever and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of both results:

    - the same style in all commits, and
    - minimal merge work?

    To clarify, this is not about best practices when starting the project, but rather about what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history. Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?

  • Better way to go up/down slope based on yaw?

    - by CyanPrime
    Alright, so I have a bit of movement code, and I'm thinking I'm going to need to manually specify when to go up or down a slope. All I have to work with is the slope's normal and vector, my current and previous positions, and my yaw. Is there a better way to work out whether I go up or down the slope based on my yaw?

        Vector3f move = new Vector3f(0, 0, 0);
        move.x = (float) -Math.toDegrees(Math.cos(Math.toRadians(yaw)));
        move.z = (float) -Math.toDegrees(Math.sin(Math.toRadians(yaw)));
        move.normalise();

        if (move.z < 0 && slopeNormal.z > 0 || move.z > 0 && slopeNormal.z < 0) {
            if (move.x < 0 && slopeNormal.x > 0 || move.x > 0 && slopeNormal.x < 0) {
                move.y += slopeVec.y;
            }
        }
        if (move.z > 0 && slopeNormal.z > 0 || move.z < 0 && slopeNormal.z < 0) {
            if (move.x > 0 && slopeNormal.x > 0 || move.x < 0 && slopeNormal.x < 0) {
                move.y -= slopeVec.y;
            }
        }

        move.scale(movementSpeed * delta);
        Vector3f.add(pos, move, pos);
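
    One standard alternative, sketched below but not tested against the code above, is to skip the sign checks entirely and project the horizontal movement vector onto the slope plane using the slope normal; the projection automatically produces the right upward or downward y component for any yaw. This assumes the Vector3f above is LWJGL's org.lwjgl.util.vector.Vector3f.

        import org.lwjgl.util.vector.Vector3f;

        public final class SlopeMovement {

            /**
             * Projects a desired horizontal move direction onto the plane defined by
             * slopeNormal, so the character follows the slope up or down automatically.
             */
            public static Vector3f projectOntoSlope(Vector3f move, Vector3f slopeNormal) {
                // Assumes slopeNormal is normalised.
                float dot = Vector3f.dot(move, slopeNormal);
                // Subtract the component of 'move' that points into the slope.
                Vector3f projected = new Vector3f(
                        move.x - dot * slopeNormal.x,
                        move.y - dot * slopeNormal.y,
                        move.z - dot * slopeNormal.z);
                if (projected.lengthSquared() > 0f) {
                    projected.normalise();
                }
                return projected;
            }
        }

    Usage would be roughly: build the horizontal direction from the yaw as in the snippet above, call projectOntoSlope(move, slopeNormal), then scale the result by movementSpeed * delta and add it to pos.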

  • In VirtualBox, I can't access the DVD drive to install a guest OS

    - by user211062
    I have installed a fresh copy of Ubuntu Server 12.04 and VirtualBox 4.3. I have set up a VM called "MediaServer" and tried to start it. I then get the following error:

        Cannot open host device '/dev/sr0' for readonly access. Check the permissions
        of that device ('/bin/ls -l /dev/sr0'): Most probably you need to be member of
        the device group. Make sure that you logout/login after changing the group
        settings of the current user (VERR_ACCESS_DENIED)

    I have looked all over the Internet and have been unable to find a solution. Using Webmin, I tried changing the group settings so that my user name was in the "vboxusers" group, but that did not work. I tried various other changes to the group settings, and none of them worked either. I also tried rebooting the server after the changes, and that didn't work. I have been following a guide on how to set up an Ubuntu server from the website "linuxhomeserverguide.com", and when it came to the section where you finally set up your first virtual machine, I got stuck. I would really appreciate it if someone could help me. Thanks in advance.

  • How do laptop battery voltages affect runtime?

    - by Bigbio2002
    I ordered a new battery for my faithful XPS M1710. I'm not sure of the voltage of the battery I have now, but the new one that the Dell rep got me (after confirming my phone number and laptop model number 3-4 times) is 14.8 V. I was a bit concerned about potential incompatibilities (as most of the other compatible batteries listed were 11.1 V), but I figure there's no way that Dell would "recommend" batteries that wouldn't work or would fry your system. Now, my question is: how does voltage affect battery life? If we assume the needed power draw to be constant, a higher voltage would mean less current is needed, so the battery would last longer before running out, yes? Or am I missing something? For reference:

        P = I * V
        P = power, I = current, V = voltage (duh)
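
    As a rough worked example of why capacity matters as much as voltage here (the amp-hour figures below are made up for illustration, not taken from actual Dell part specs): runtime is set by stored energy divided by power draw, and stored energy is voltage times amp-hour capacity.

        E = V * Ah                              (energy in watt-hours)
        11.1 V * 7.2 Ah ≈ 80 Wh
        14.8 V * 5.4 Ah ≈ 80 Wh                 (same energy, fewer amp-hours)

        runtime ≈ E / P = 80 Wh / 40 W = 2 h    (for a hypothetical 40 W draw)

    So at the same watt-hour rating, a higher-voltage pack draws less current but also ships with proportionally fewer amp-hours, and the runtime comes out about the same.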

  • How to prevent showing outside of the game world in Cocos2D-x

    - by HRZ
    I'm trying to make a tower defense game that can zoom in/out and scroll over my world map. How do I scroll over the game, and how do I restrict it so it doesn't show anything outside of my map (the black area)? Below, I scroll over the map using CCCamera, but I don't know how to restrict it.

        CCPoint tap = touch->getLocation();
        CCPoint prev_tap = touch->getPreviousLocation();
        CCPoint sub_point = tap - prev_tap;

        float xNewPos, yNewPos;
        float xEyePos, yEyePos, zEyePos;
        float cameraPosX, cameraPosY, cameraPosZ;

        // First we get the current camera position.
        GameLayer->getCamera()->getCenterXYZ(&cameraPosX, &cameraPosY, &cameraPosZ);
        GameLayer->getCamera()->getEyeXYZ(&xEyePos, &yEyePos, &zEyePos);

        // Calculate the new position
        xNewPos = cameraPosX - sub_point.x;
        yNewPos = cameraPosY - sub_point.y;

        GameLayer->getCamera()->setCenterXYZ(xNewPos, yNewPos, cameraPosZ);
        GameLayer->getCamera()->setEyeXYZ(xNewPos, yNewPos, zEyePos);

    And for zooming I used this code:

        GameLayer->setScale(GameLayer->getScale() + 0.002); // zooming in
