Search Results

Search found 21352 results on 855 pages for 'bit shift'.


  • One-way sync with either Dropbox or Google Drive on Linux and Windows

    - by sup
    I could not google this one. I would like to use Dropbox or Google Drive only as backup, so I would like to ensure I only upload to those services and never download (unless done manually via the web interface). There would be several of us uploading stuff to one account, so we would need to ensure we don't accidentally delete something on another guy's machine by making changes on our own. Is there a simple way to do this automatically, i.e. by running a daemon that will upload everything in a given folder but never download anything? I am on Linux, so that complicates things a bit, but I am also interested in Windows solutions.
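
    For what it's worth, a sketch of one way to get there, hedged: none of this is from the question, and the rclone tool it assumes (which runs on both Linux and Windows and has Dropbox and Google Drive backends) would need a remote named "gdrive" set up beforehand via "rclone config". Its copy command only uploads and never deletes on the remote, so nobody's local deletions can clobber another machine:

        # Upload-only "push" of a local folder; never deletes remote files.
        rclone copy /home/sup/backup gdrive:team-backup -v

        # Approximate a daemon with cron, e.g. every 15 minutes:
        # */15 * * * * rclone copy /home/sup/backup gdrive:team-backup --log-file=$HOME/rclone.log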

    Read the article

  • Is using SVN for development and CM a bad practice?

    - by GatorGuy
    I have a bit of experience with SVN as a pure programmer/developer. Within my company, however, we use SVN as our configuration management (CM) tool. I thought using SVN for development at the same time was OK, since we could use branches and the trunk for dev and tags for releases. To me, the tags were the CM part and the branches/trunk were the dev part. Recently a person who develops high-level code (but outside of the "pure SW" group) mentioned that the existing philosophy (mixing SVN for dev and CM) was wrong... in his opinion. His reasoning is that he thinks the company's CM tool should always link to runnable SW (so branches would break this rule). He also mentioned that a CM tool shouldn't be a backup utility for daily or incremental commits. Finally, he doesn't like the idea of having to jump from revision 143 to 89 in order to get a working copy... and further, that CM tools shouldn't allow reversion to a broken state. In general he wants to separate the CM and backup/dev utilities that SVN offers. Honestly, I am new and the person with this perspective is one of seniority, experience, and success, so I want to field this dilemma with the Stack Overflow userbase to see if his approach has merit. My question: Should SVN be used purely for development, and another tool for CM (or vice versa)? Why? If so, what tools would you suggest for this combo? Or do you think that integrating both CM and dev into SVN is the best approach? Why? Thanks.

    Read the article

  • IIS not using available memory?

    - by Herb Caudill
    Recently launched an ASP.NET site running on a single 32-bit WS2003 box (SQL on a separate server). The server has 4 GB installed, 3 GB available. According to Task Manager, the w3wp.exe process is only using between 200-600 MB. The site has tens of thousands of pages and makes heavy use of page output caching, so I would expect it to use a lot more of the available memory. The app pool isn't set to throttle memory usage. Is there anything else that might be limiting the amount of memory that IIS takes?

    Read the article

  • What is better for the overall performance and feel of the game: one setInterval performing all the work, or many of them doing individual tasks?

    - by Bane
    This question is, I suppose, not limited to JavaScript, but it is the language I use to create my game, so I'll use it as an example. For now, I have structured my HTML5 game like this:

        var fps = 60;
        var game = new Game();
        setInterval(game.update, 1000/fps);

    And game.update looks like this:

        this.update = function() {
            this.parseInput();
            this.logic();
            this.physics();
            this.draw();
        };

    This seems a bit inefficient; maybe I don't need to do all of those things at once. An obvious alternative would be to have more intervals performing individual tasks, but is it worth it?

        var fps = 60;
        var game = new Game();
        setInterval(game.draw, 1000/fps);
        setInterval(game.physics, 1000/a); // where "a" is some constant serving the same role as "fps"
        // ...

    Which approach should I go with, and why? Is there a better alternative? Also, if the second approach is the best, how frequently should I perform each task?

    Read the article

  • 12.04 Taking forever for no apparent reason

    - by Sam
    First off, I'd just like to say how much I'm loving Ubuntu so far. It's one of my first steps into Linux, but so far it's blown away Windows in almost every regard. Now, onto my problem. My brothers have some old Dell Dimensions that are barely clinging to life; both were running XP Home Edition. The 2400 installed just fine, no issues at all. The other computer, the 3000, never gets past the screen with the Ubuntu logo and the pulsing dots. I've tried multiple discs, including the exact same one that gave me no issue in the other computer, and I'm at a loss. Does anyone have any suggestions as to where the problem might lie? They both have 32-bit Intel processors, and it's the correct version of Ubuntu. Is it a bad disc drive? Hardware incompatibility? Thanks for any assistance that can be provided. The machines: Dell Dimension 2400, Dell Dimension 3000.

    Read the article

  • MS Marketing Strategy

    - by Aaron Kowall
    I found this week's Windows Phone 8 event interesting. Not just because the new OS looks to have some fantastic features, but because of the wait for release. If I were a Nokia shareholder (which I am not) I'd be very unhappy with MS announcing that Windows Phone 8 will NOT work with current hardware. So, there are some very nice Lumia devices, arrived relatively recently at carriers and retailers, that are now end-of-life. I understand that MS needs to demonstrate progress against iOS and Android and that there is some Windows 8 tie-in they are trying to capitalize on (and MS IS still all about Windows). However, it's a bit of a kick to partners that have invested in the platform with pretty decent devices (Samsung, HTC and of course Nokia). Personally, I'm still using a Samsung Focus. I was seriously considering upgrading to a Lumia 900 (we just got Lync mobile available) but will now wait it out until new devices arrive with Windows Phone 8. If MS had waited to announce, I would happily have upgraded to the Lumia, and when I found out it couldn't be upgraded, that would be a gamble I took and lost and I'd live with it. Now, however, I can see the future and know that waiting is the better option for me, so that is one sale Nokia will miss out on. Based on some chats I've seen on mobile forums, I'm certainly far from the only one. I'm sure glad I'm not in charge of marketing at MS. There are tough decisions to be made there, and I'm pretty sure you piss somebody off regardless. Technorati Tags: WP8,Lumia,Nokia,Samsung

    Read the article

  • English major new to programming. What language should I learn first? [closed]

    - by PJKaka
    After working extensively at an internet startup in a marketing position, I've decided to wade into the entrepreneurship pool with a startup of my own. The only problem: I don't have any particular technical skills to speak of. Although I could find a technical co-founder, I'd rather not be the stereotypical 'business guy' drumming his fingers on the desk and asking 'how much longer?' as my technical co-founder codes away. I would like to understand code and what's happening in the backend, even if I don't end up being anything more than a 'passable' programmer. With this in mind, which language should I try to learn first? For the record, I'm quite proficient with HTML, CSS, and a bit of JavaScript. I have some familiarity with PHP because I've toyed around with WordPress a lot, but my knowledge is limited at best. My math skills are quite strong; I took some advanced calculus courses in college, since I've always enjoyed the subject. While my goal is to learn web development, I wouldn't mind learning some hardcore object-oriented programming in C or Java as well.

    Read the article

  • LDAP authentication: Windows Server2k3 vs. 2k8

    - by wolfgangsz
    We have around 70% Linux users, all of whom are configured to authenticate against Active Directory through LDAP. In order for this to work, we used the "Windows Services for Unix" under Windows Server 2003, and it all works fine. We are now at a point where the server running this contraption is getting a bit tired and will be replaced with a newer machine running Windows Server 2008 (where the relevant services, such as user name mapping and password changes, are integrated with the OS). And here's the rub: if a new user is configured through the Win2k3 server, then it all works fine. If the same thing is done through the Win2k8 server, then the ADS plugin on the 2k3 server does not recognize it and behaves as if the UNIX attributes were never set, and the user cannot authenticate against ADS using LDAP. Has anybody encountered this problem? If so, how did you overcome it? If you need any additional information to provide further help, just ask and I shall provide it.
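
    One hedged diagnostic, assuming the usual schema difference is at fault: SFU 3.5 on Win2k3 stores the UNIX data in msSFU30* attributes, while the 2008-integrated services use the RFC 2307 attributes (uidNumber, gidNumber, and so on). Comparing a user created on each server shows which set gets populated; the host, bind DN and base DN below are placeholders:

        # Dump both attribute families for one account and see which are set.
        ldapsearch -x -H ldap://dc1.example.com -D 'EXAMPLE\admin' -W \
            -b 'dc=example,dc=com' '(sAMAccountName=jdoe)' \
            uidNumber gidNumber unixHomeDirectory loginShell \
            msSFU30UidNumber msSFU30GidNumber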

    Read the article

  • SharePoint 2010 MySites - Simple explanation needed!

    - by Chris W
    I've been playing around with the 2010 beta for a couple of weeks, experimenting with topology options etc. I think I've got myself totally confused as to how it works, so if there are any SP experts out there who can explain things in simple terms, I'd appreciate it! I want to set up a farm with 3 servers providing the content & MySites. I presume that the way to do this is to load balance or DNS round-robin traffic between the 3 servers. The bit where I'm confused is that the My Site Settings page asks for a specific My Site Host, hence all MySite traffic will be pushed to a single server even though we have 3 in the farm. If this host fails, I presume MySites will be unavailable. Is this right? How do I configure it so that access to MySites is load balanced across the 3 servers in the farm?

    Read the article

  • nginx + reverse proxy question

    - by Joe Pilon
    Hello, I am using nginx right now for our production sites, with a reverse proxy to Apache on the same server, and it works fantastically. I'm wondering if I can do this: install nginx on box #1 in, say, Canada and have it reverse proxy HTTP requests to box #2 in a datacenter in the USA. I know there may be some latency or delays in loading the page, but that would probably not be noticeable to the end user, especially if both servers have 100 Mb ports. Box #2 only handles the Apache requests; all images are served from box #1 via nginx. Now, would the end visitor be able to tell in any way that there are 2 boxes being used? Box #2 has sensitive data which we can't have stolen in the event of hacking, so this method helps keep things a bit more secure. Anyone know if this is possible, or have done something similar?
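
    As a hedged sketch of the split (the domain and addresses are placeholders, not from the question): nginx on box #1 serves static assets itself and proxies everything else over the WAN to Apache on box #2, so visitors only ever talk to box #1:

        cat > /etc/nginx/conf.d/remote-proxy.conf <<'EOF'
        server {
            listen 80;
            server_name example.com;                  # placeholder domain

            # Images and other static assets stay on box #1
            location ~* \.(jpe?g|png|gif|css|js)$ {
                root /var/www/static;
            }

            # Dynamic requests cross the WAN to Apache on box #2
            location / {
                proxy_pass http://203.0.113.10:80;    # box #2, placeholder IP
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
        EOF
        nginx -s reload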

    Read the article

  • Unable to access my own websites from our home

    - by user2521866
    Not sure if this is the right place to ask, but I'm going to have a shot at it. I host a couple of websites with a webhost in the Netherlands. For a couple of days now, I've been unable to connect to them from my home network. When using tracert at the command prompt, I'm getting a timeout after about 4-5 hops. The websites seem fine when accessed from anywhere except my home network; other PCs around the house also fail to open them. I've tried flushing my DNS, as suggested in some other topics, but to no avail just yet. One of the websites: http://bit.ly/1hbqs4J I've contacted my host about it as well, but no response yet. Trying to take control of the situation myself now as much as I can. Regards, Dave

    Read the article

  • FreePBX: Asterisk in the Cloud (EC2) Audio Problems

    - by neezer
    Please pardon the newbie question, but I can't seem to figure this out. I followed Voxilla's tutorial to a tee: http://voxilla.com/2009/10/15/voxill...p-by-step-1457 But when making calls, my softphones connect, yet there is no audio (in either direction). I know from poking around the forums that this is generally caused by two factors: NAT and audio codecs. I (being new to the arena), however, don't know which. I believe I have Asterisk and the clients restricted to just ulaw, and I also believe I have the correct ports open and my externip set correctly (I think the Voxilla AMI does this automatically, since it's in the cloud). I'm a bit lost. I'd be happy to post whatever configuration files might help, provided you tell me where they are on the filesystem. But like I said before, this is effectively a vanilla install of Voxilla's own FreePBX AMI. I'd appreciate any help or guidance here. Thanks!
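
    In case it helps, a hedged sketch of the two usual suspects, written against FreePBX-era conventions (sip_nat.conf is assumed to be included from sip.conf, as FreePBX normally does; the addresses are placeholders):

        # Pin the codec to ulaw and declare the NAT topology.
        cat >> /etc/asterisk/sip_nat.conf <<'EOF'
        externip=203.0.113.25         ; the instance's public Elastic IP
        localnet=10.0.0.0/255.0.0.0   ; EC2-internal address range
        nat=yes
        disallow=all
        allow=ulaw
        EOF
        asterisk -rx "sip reload"

        # The EC2 security group must also allow UDP 5060 (SIP) and the RTP
        # range 10000-20000 (Asterisk's default); blocked RTP is the classic
        # cause of calls that connect but carry no audio.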

    Read the article

  • Client/server game even in solo mode: any big problem?

    - by Klaim
    I'm making a game whose basic design is strongly multiplayer-oriented, but which should also provide a really interesting and self-sufficient solo game; a bit like a real-time strategy game. The events and actions taken shouldn't be as massive and immediate as in an FPS, so you can also think of the networking as being like an RTS's. It's a PC game targeting Windows, Mac OS X and Linux (Ubuntu & Fedora). It's programmed in C++, using a variety of open source libraries, so I have great (potential) control over the performance. So far I always considered that just making the game work as two applications, client & server, even in solo mode, was OK. However, as I'm in the process of starting the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons. Pros: the game works only one way, so if I fix a bug it should apply to all game modes, whatever the distance to the server is; and basic networking issues would be detected early, including behaviour with installed protection software such as firewalls (I am no specialist, so this might be wrong). Cons: I suppose that even if it should be fast enough, networking the client and server on the same computer would still be slower than no networking with message passing within a single process's memory; and maybe debugging would be more difficult? I don't have experience with this case, but so far I assume that Visual Studio lets me debug multiple processes, so it shouldn't be really different (there is also remote debugging). My question is: is there a big disadvantage that I missed? Or maybe there are advantages that I missed and that should encourage me to just continue with client-server game sessions only?

    Read the article

  • What are the common Linux commands for SAN-related activities? How do I check if a LUN is attached to the computer?

    - by Nishant
    How do I check if a LUN has been presented to my server? What are the Linux commands for that? Do the LUNs show up in an fdisk -l listing the way a normal /dev/sda does? What other commands are associated with general SAN-related checks in Linux? What is a WWN, and what relevance does it have? If we have LUNs, what is the use of multipathing? A bit lengthy, but I am not able to get a grasp on the topic. Any help would be appreciated.
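
    Not from the question, but a sketch of the usual checklist on a Fibre Channel host (it assumes the lsscsi and multipath-tools packages or their equivalents, and root privileges; host numbers vary):

        # 1. Rescan the SCSI bus so a newly presented LUN shows up:
        for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done

        # 2. List devices; presented LUNs do appear in fdisk -l as /dev/sdX:
        lsscsi
        fdisk -l

        # 3. WWNs are the World Wide Names that identify your HBA ports to the
        #    SAN fabric; the storage admin presents LUNs to these addresses:
        cat /sys/class/fc_host/host*/port_name

        # 4. Multipathing makes one LUN reachable over several independent
        #    fabric paths, collapsed into a single device for redundancy:
        multipath -ll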

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc...) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried by the fact that Google's fetch is not getting the correct title tags and meta information from our homepage, and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and is able to move between HTTPS and HTTP without issues. Does anybody have any advice on how we can correctly set this up, or be sure that Google is fetching the correct information?
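
    A quick, hedged way to check what a crawler ends up with (mysite.com stands in for the real domain): the first request should show only the 301 hop above, while the second follows it and should return the page's real title and meta tags, which is what actually gets indexed:

        # The bare 301, as in the Fetch as Google output:
        curl -sI http://mysite.com/ | head -n 5

        # Follow the redirect and confirm the real <title>/<meta> come through:
        curl -sL http://mysite.com/ | grep -iE '<title>|<meta name="description"'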

    Read the article

  • Permission denied on network share

    - by Philipp
    I have a Windows 8 host system running a virtual (Hyper-V) Debian 6 client with a LAMP environment. My development environment runs under Windows, and I mapped the folder with my PHP files to a network drive so Apache has access to them (mount.cifs //pc/share /var/share/). So far, no problems; I see my app on Windows in the browser. The problem is that PHP can't write anything to the share: every time, I get a permission-denied message in my error logs. For testing purposes I tried to change the directory permissions of /var/share with chmod -R 777 /var/share, without success. Now I am a little bit stumped... does anyone have an idea how to solve this?
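
    A hedged sketch of the usual fix (the user name and modes are assumptions): permissions on a CIFS mount are fixed by mount options, so chmod on the mounted tree changes nothing. Remounting with ownership mapped to Debian's Apache user gives PHP write access:

        umount /var/share
        # www-data is Debian's Apache/PHP user; the credentials are placeholders.
        mount -t cifs //pc/share /var/share \
            -o username=winuser,password=secret,uid=www-data,gid=www-data,file_mode=0664,dir_mode=0775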

    Read the article

  • Ternary and Artificial Intelligence

    - by user2957844
    Not much of a programmer myself, but I have been thinking about the future of AI. If a fully functional AI is programmed in a binary environment, as is used in current computing, would that create a bit of a black-and-white personality? As in just yes/no, on/off, 1/0? I will use the Skynet computer from the Terminator series as a (bad) analogy: it is brought online and comes to the conclusion that humanity should just be destroyed so the problem is resolved; basically its only options were to fire the missiles or not. (The films do not really go into what its moves would be after doing such a thing, but that gets into the realm of AI evolution, so it does not really fit with this question.) It may also have been badly programmed. Now, the human mind has been likened to a ternary system, which allows our "out of the box" thinking along with all the other wonderful things our minds can do. So, would it not be more prudent to create a functional ternary system and program an AI using it, so the resulting personality would be able to benefit from the third "maybe" (so to speak) option? I understand that in binary there are ways to get around the whole yes/no way of things; however, the basic operations are still just 1s and 0s. Again using the bad Skynet analogy: if it could have had that third "maybe" option as part of its core system, it may have decided not to launch, due to being able to make sense of the intricacies of human nature and the politics of such a move. In effect, my question is: would an AI benefit more from ternary computing than from binary, due to the inclusion of a third value ("maybe," as I call it, represented as -1 or 2 depending on the system)?

    Read the article

  • Let's do the Time Warp again!

    - by Mike Dietrich
    Once you start reading about Daylight Saving Time changes in My Oracle Support, you'll find a lot of notes explaining this and that, back and forth. But sometimes there seems to be a bit too much information, and a lack of clear instructions. One customer called it the "Time Zone Spaghetti": after reading MOS notes about DST for several hours, he ended up back at the note where he had begun, still not clear about what to do. I usually use the scripts from MOS Note:977512.1, as you'll just have to exchange the DST version you are upgrading to, and it has everything you need to check and adjust the time zone data in the database - for instance, after applying the DST V18 patch to your database homes. As a reminder to myself when traveling, I have stored a copy of the script part of that note here - and please note that this is not an official Oracle version. Always read and check the original MOS Note:977512.1, as it may have changed in the meantime, may contain corrections, and has a lot more explanatory information than I could cover here. And credit to Gunter Vermeir from Oracle Support, who is the owner of that MOS Note and has compiled all that useful stuff together. DST_prepare.sql DST_adjust.sql

    Read the article

  • X server not starting up after new kernel compilation

    - by tech_learner
    I have compiled a new kernel on my 64-bit Dell Studio XPS 1340 running Debian:

        srikanth@debian:~ - 05:40:52 PM - $ uname -a
        Linux debian 2.6.32-5-amd64 #1 SMP Thu Mar 22 17:26:33 UTC 2012 x86_64 GNU/Linux

    The kernel version I used and compiled from kernel.org is 2.6.35.13. I have the NVIDIA driver installed for the old kernel. I took the old kernel's config and used the same config to compile the new kernel. Everything went well, and I got two Debian packages (image and headers), which I have installed on my system. When I select the new kernel in the boot menu and boot into it, the X server does not start, possibly because I have to "rebuild" the NVIDIA module (not sure how to do that) according to this link: http://www.linuxquestions.org/questions/slackware-14/x-server-not-starting-after-kernel-compilation-605265/ Can you suggest how to do the rebuild of the nvidia module so that I can start X (without seeing a blank screen or an error saying the nvidia module is missing)? PS: The guide I used to compile the kernel is https://help.ubuntu.com/community/Kernel/Compile#Alternate_Build_Method:_The_Old-Fashioned_Debian_Way
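
    One Debian way to do that rebuild, sketched under the assumption that the driver came from Debian's packaged nvidia-kernel-source (if the NVIDIA .run installer was used originally, re-running it while booted into the new kernel is the equivalent step). Boot the new kernel first, then:

        apt-get install module-assistant nvidia-kernel-source
        m-a update
        m-a prepare                            # fetches headers for the running kernel
        m-a auto-install nvidia-kernel-source  # builds and installs nvidia.ko
        depmod -a
        modprobe nvidia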

    Read the article

  • What does the term 'overinstallation' mean?

    - by Kent Pawar
    I came across this term here: "11882875 -- Essbase Server does not start after an overinstallation." I know that a clean install is a software installation in which any previous version is removed. Googling 'overinstallation' turned up nothing, and I don't like to just assume it simply means 're-install'. UPDATE: My understanding now is that the term "re-install" can be a bit ambiguous, as it could signify an installation either after an uninstallation or without one. The term "over-installation", on the other hand, specifically means installing something over an existing installation, with no uninstallation involved.

    Read the article

  • Is it possible to re-create the Windows 8.1 install image after the upgrade?

    - by rossmcm
    I have downloaded and installed the Windows 8.1 upgrade from the Windows Store. The hardware is a 64-bit Toshiba P50 laptop. I need to upgrade a second P50 and wish to do so without another 3.6 GB download (I tried the instructions here but never got the chance to create the installation media, nor was I asked for a product key). I saw mention on Super User of creating USB rescue media after installation and using that to clone the upgrade onto another machine. Is this likely to be a viable option?

    Read the article

  • Can Server 2008's Task Scheduler run a PHP file?

    - by rg89
    Hello. I have a Server 2008 64-bit machine with PHP 5 installed via FastCGI. I want to run a .php script every day at 3 AM. I set up a task, and "Last Run Result" says "%1 is not a valid Win32 application". The event properties describe the failure further: "Task Scheduler failed to launch action "D:\InetPub\tools\something\build.php" in instance "{88cc01f4-9554-4b8f-9836-34d806337d7f}" of task "\Something". Additional Data: Error Value: 2147942593." Task Category: Action failed to start. Is it possible to run scripts using the Task Scheduler? If not, how should I go about automating the execution of a PHP script? Thanks
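
    The error message itself points at the cause: a .php file is not a Win32 executable, so the task's action has to be the PHP interpreter with the script passed as an argument. A hedged sketch (the php.exe location is an assumption):

        REM "Program/script" is the interpreter; the script goes in "Add arguments":
        C:\PHP\php.exe -f "D:\InetPub\tools\something\build.php"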

    Read the article

  • Installing Solaris 10 on Sun T5220 - ZFS/UFS RAID 10?

    - by Matthew
    I am in a bit of a time crunch and need to get two T5220s built. We were very happy to see two boxes in our aged inventory which had 8 HDDs each, but didn't think to check whether they had hardware RAID or not. Turns out that they don't. When we install, we are given the option to use UFS or ZFS, but when we select a place to install, we're only given the option of installing on one single disk. Is it possible to create a software RAID 10 across all of the disks and install the OS on that? Sorry if any lingo is wrong; I'm not really a Sun guy, and our guru is out of town right now. Any help would be really appreciated! Note: most of the guides I've found on Google entail installing the OS on a single disk and then creating a separate RAID 10 on other disks. We would actually like the OS to reside on the RAID 10. Hope that clarifies things.
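
    For reference, a hedged sketch of the usual compromise (device names are placeholders): the Solaris 10 ZFS root pool only supports a single disk or a mirror, not a stripe of mirrors, so a true 8-disk RAID 10 holding the OS isn't possible. Mirroring root across two disks and striping mirrors across the rest comes close:

        # Mirror the root pool (slices named as the installer typically labels them):
        zpool attach rpool c1t0d0s0 c1t1d0s0

        # RAID 10 (a stripe of mirrors) across the remaining six disks:
        zpool create datapool \
            mirror c1t2d0 c1t3d0 \
            mirror c1t4d0 c1t5d0 \
            mirror c1t6d0 c1t7d0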

    Read the article

  • Interactive command to let user change directory in bash

    - by Rich
    I am looking for a curses-based way (bash, C, doesn't really matter) of letting a user choose a folder or even a file, in roughly the same way they would do it in Midnight Commander. I envisage using up/down for moving the cursor, Esc to cancel, and Enter to select the item under the cursor. If the item is a file, return the full path to that file; if the item is a folder, change into that folder. Does anyone know of one that exists? If not, how would I go about writing one? I'm mainly a Java programmer, so I could use JavaCurses, but that feels a bit like overkill.
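
    One existing tool comes close: dialog(1) ships a curses file-selection widget. A minimal sketch, assuming the dialog package is installed:

        #!/bin/bash
        # Arrow keys and Tab navigate; Enter selects; Esc cancels (nonzero status).
        choice=$(dialog --stdout --title "Choose a file or folder" \
                        --fselect "$HOME/" 20 70)
        rc=$?
        clear
        [ "$rc" -eq 0 ] && echo "Selected: $choice"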

    Read the article

  • Asus 1215P cannot get the 1366x768 resolution

    - by Arthur
    Hi everyone, I'm a little bit stuck. I had Windows 7 Starter installed on this netbook; it wasn't that great, so I installed Windows 7 Ultimate. Everything is OK apart from the screen resolution: it doesn't let me choose the optimal 1366x768 resolution and defaults to a lower one; in fact it doesn't even list it. I have tried drivers from Microsoft, Asus and Intel and still no joy. Any suggestions? It has the Intel 3150 Graphics Media Accelerator. Much appreciated :)

    Read the article
