Search Results

Search found 23627 results on 946 pages for 'alter script'.

Page 688/946

  • Network DFS Shares Jumping Back To Root

    - by Taz
    We map several network drives to DFS locations via a logon script. Recently we've had a number of users complain of a very unusual behaviour when navigating these shares. They will be going through folders and will get 'rubber-banded' back to the root of the share. This will happen for a few minutes and then go back to behaving normally. The users are on Windows 7 and the fileshare is on Windows Server 2K8R2. Any idea what could be causing this annoying behaviour?

    Read the article

  • Extremely simple online multiplayer game

    - by Postscripter
    I am considering creating a simple multiplayer game, which focuses on physics and can accommodate up to 30 players per session. Very simple graphics, but smart physics (pushing, weight and gravity, balance) is required. After some research I found a good JavaScript physics library called box2d.js, and I found the demo to be excellent. This is the kind of physics I am looking for in my game. Now, what other frameworks will I need? Node.js? Prototype.js? (By the way, I found that the latest version of Prototype.js was released in 2010. Is it still supported, or should I avoid using it?) What about HTML5 and Canvas? Would I need them? WebSockets? I am a beginner in the web and game programming world, but I will learn fast; I am a computer science graduate (not much web experience, but I know the essentials of JavaScript, HTML, and CSS). I just need a guiding path to build my game. Thanks.

    Read the article

  • logrotate> removing delaycompress function: should I compress the last log myself?

    - by user120006
    I'm removing the delaycompress option from my logrotate configuration. Before running logrotate again, should I compress the last log myself? This is the current situation:

        -rw-r----- 1 root adm 4,7M  5 mag 18:38 access.log
        -rw-r----- 1 root adm 5,2M 29 apr 05:44 access.log.1
        -rw-r----- 1 root adm 473K 22 apr 05:45 access.log.2.gz
        -rw-r----- 1 root adm 605K 15 apr 05:44 access.log.3.gz
        -rw-r----- 1 root adm 588K  8 apr 05:44 access.log.4.gz

    The question is: should I compress "access.log.1" and THEN launch logrotate? Or will logrotate understand that I removed the "delaycompress" option and fix things itself?
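
    Whether logrotate will go back and compress the old access.log.1 on its next run is not guaranteed, so compressing it by hand is the safe option. A minimal sketch, assuming the logs live in /var/log/nginx and the stanza is /etc/logrotate.d/nginx (adjust both to your setup):

        # compress the rotated-but-uncompressed log left over from delaycompress
        gzip /var/log/nginx/access.log.1

        # dry-run the changed config to see what logrotate would do
        logrotate -d /etc/logrotate.d/nginx

        # optionally force a real rotation and check the resulting file names
        logrotate -f /etc/logrotate.d/nginx
        ls -lh /var/log/nginx/access.log*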

    Read the article

  • Windows 7 unload driver/disconnect usb before restart

    - by brux
    In XP, my Tascam US-122 USB audio interface works fine. On my Windows 7 computer the device works; however, when I restart or shut down, it hangs at the "Shutting down" screen forever. If the USB cable is removed prior to initiating the restart/shutdown, everything goes fine and the computer restarts as normal. This is a known problem, and running the driver in compatibility mode doesn't fix it. Is there a method I can use to unload the device/drivers so that I don't have to reach around and physically disconnect the cable? Somebody suggested an SPDT switch on the cable itself, but I am looking for a simpler method. Surely I can automate the unloading of the device through a script or something? Any ideas are greatly welcome, thanks.

    Read the article

  • how to remotely open an URL in Firefox in a specific profile?

    - by miernik
    I have several instances of Firefox running, with several different profiles, among them profiles named "software" and "test". I am trying to open a URL from a bash script and have it open in profile "test", like this: firefox -P "test" http://www.example.org/ However, that opens it in profile "software" anyway. Any ideas? Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.8) Gecko/20100308 Iceweasel/3.5.8 (like Firefox/3.5.8) No, it is not a permissions problem; all my profile directories are owned by my user:

        root@przehyba:~/.mozilla# ls -ld firefox/
        drwx------ 13 miernik miernik 4096 Mar 11 09:15 firefox/
        root@przehyba:~/.mozilla# ls -ld firefox/*
        drwxr-xr-x  9 miernik miernik 4096 Mar 12 11:29 firefox/info
        -rw-r--r--  1 miernik miernik  560 Mar 11 09:15 firefox/profiles.ini
        drwxr-xr-x 10 miernik miernik 4096 Mar 16 11:51 firefox/software
        drwxr-xr-x  9 miernik miernik 4096 Mar 11 09:14 firefox/tech
        drwxr-xr-x 11 miernik miernik 4096 Mar 15 22:48 firefox/test
        root@przehyba:~/.mozilla#
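
    For reference, one commonly reported cause: if another Firefox instance is already running, the command line is handed off to that instance and the -P selection is ignored. A small sketch of the usual workaround, assuming the "test" profile already exists and is not in use by a running instance:

        # -no-remote starts a separate process instead of delegating to the running one,
        # so the requested profile is actually honoured
        firefox -no-remote -P "test" "http://www.example.org/"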

    Read the article

  • rdiff-backup failed due to target machine being down, but is unkillable

    - by Markus
    My backup script was invoked by cron, using rdiff-backup to back up the user files onto a target system on the network. That target computer went down at some point, yet still appeared as mounted on the server. rdiff-backup didn't do anything, but it still appears as a process, and killing it doesn't stop it. Similarly, running rdiff-backup for other directories works but doesn't exit properly and remains in the process list. Is there anything short of rebooting the server that I can try?
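
    For what it's worth, when the target disappears the rdiff-backup process is usually stuck in uninterruptible sleep on the dead mount, which is why kill has no effect. A hedged sketch of things to try before rebooting; the mount point path is an assumption:

        # a STAT of "D" means uninterruptible sleep, typically blocked on dead I/O
        ps -o pid,stat,cmd -C rdiff-backup

        # lazily detach the stale mount; blocked processes often then error out and exit
        umount -l /mnt/backup-target

        # once the process leaves the D state, a normal kill should work
        kill -9 <PID>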

    Read the article

  • How do I change the NGINX user?

    - by danielfaraday
    I have a PHP script that creates a directory and outputs an image to the directory. This was working just fine under Apache, but we recently decided to switch to NGINX to make better use of our limited RAM. I'm using the PHP mkdir() command to create the directory: mkdir(dirname($path['image']['server']), 0755, true); After the switch to NGINX, I'm getting the following warning: Warning: mkdir(): Permission denied in ... I've already checked all the permissions of the parent directories, so I've determined that I probably need to change the NGINX or PHP-FPM user, but I'm not sure how to do that (I never had to specify user permissions for Apache). I can't seem to find much information on this. Any help would be great! (Note: besides this little hang-up, the switch to NGINX has been pretty seamless; I'm using it for the first time and it literally only took about 10 minutes to get up and running. Now I'm just ironing out the kinks.)
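
    Since PHP now runs under PHP-FPM rather than mod_php, the mkdir() call executes as the PHP-FPM pool user, not as the nginx worker user. A rough sketch of where to look; the paths assume a Debian/Ubuntu-style php5-fpm layout and may differ on your system:

        # nginx's own worker user (only matters for serving static files)
        grep -E '^\s*user' /etc/nginx/nginx.conf

        # the user/group PHP-FPM runs your script as; this is what mkdir() uses
        grep -E '^\s*(user|group)' /etc/php5/fpm/pool.d/www.conf

        # either change those to a user that may write to the target directory,
        # or give that user write access, then reload both services
        service php5-fpm restart && service nginx reload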

    Read the article

  • Brightness going up to 100% on loading certain websites in Chrome

    - by picheto
    I'm using Google Chrome version 21.0.1180.89 on Ubuntu 12.04 and my laptop is a Sony VAIO VPCCW15FL (spec sheet). My video driver is the proprietary "NVIDIA accelerated graphics driver (post-release updates) (version-current updates)". After installing Ubuntu, I discovered that neither the brightness control buttons (hardware) nor the brightness slider (software) worked, and found out I could get the hardware buttons to work by installing the nvidiabl.deb package and the oBacklight script. I'm using nvidiabl-dkms 0.77 and oBacklight 0.3.8. Still, the slider in the Ubuntu "Settings" does not work, but I don't care. There is an annoying thing happening when loading certain pages in Google Chrome: the brightness goes up to 100% when loading the webpage or when leaving it (closing the tab or typing a different URL in the omnibox). However, the "brightness tooltip" (the default brightness notification) remembers the position it was set to, so if I adjust the brightness with the hardware buttons, the level gets adjusted relative to the value it was set to before "going 100%". I disabled the Flash PPAPI plugin but left the NPAPI plugin enabled, and the problem went away for pages with Flash content. Still, the same thing happens when viewing HTML5 video, or when loading, for example, the Chrome Web Store or using the Scratchpad extension. I suppose it has to do with the rendering of certain elements using the GPU, but this is just a guess. This brightness thing does not happen when using Firefox 15.0 or any other application I have used so far. Does anybody know why this may be happening and what I could do to fix it without changing browsers? Thanks a lot.

    Read the article

  • how do you uninstall an xampp installation that refuses to uninstall?

    - by Rick
    I downloaded XAMPP 1.7.3 (32-bit) on Windows 7 (64-bit) into the Program Files (x86) folder. Both MySQL and Apache refuse to run; they start and then instantly turn off. All ports are free. So I decided to uninstall; however, when I run the uninstaller I receive the following error: "Input Error: Can not find script file "C:\Program Files (x86)\xampp\uninst.temp\xampp_uninstall.vbs" XAMPP uninstall not OK". Are the spaces in "C:\Program Files (x86)" part of the problem, and do they matter? Can somebody please help me understand the problem and uninstall XAMPP (or get it to work)? It does not show in the Control Panel, so I am stuck. All help is greatly appreciated. Thank you.

    Read the article

  • Determine process using a port, without sudo

    - by pat
    I'd like to find out which process (in particular, the process id) is using a given port. The one catch is, I don't want to use sudo, nor am I logged in as root. The processes I want this to work for are run by the same user that I want to find the process id - so I would have thought this was simple. Both lsof and netstat won't tell me the process id unless I run them using sudo - they will tell me that the port is being used though. As some extra context - I have various apps all connecting via SSH to a server I manage, and creating reverse port forwards. Once those are set up, my server does some processing using the forwarded port, and then the connection can be killed. If I can map specific ports (each app has their own) to processes, this is a simple script. Any suggestions? This is on an Ubuntu box, by the way - but I'm guessing any solution will be standard across most Linux distros.
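
    For reference, a hedged sketch of what normally works without root, since a user can always inspect sockets owned by their own processes (the port number is just an example):

        # -t prints only the PID; unprivileged, this works for your own listeners
        lsof -t -i TCP:8022 -s TCP:LISTEN

        # ss / netstat show pid/program for processes you own even without sudo
        ss -tlnp 'sport = :8022'

        # fuser maps a port straight to a PID as well
        fuser 8022/tcp

    If these still come back empty, the socket may be held by a process owned by another user (sshd sometimes holds the listener for reverse forwards), in which case only root can map it to a PID.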

    Read the article

  • Building a List of All SharePoint Timer Jobs Programmatically in C#

    - by Damon Armstrong
    One of the most frustrating things about SharePoint is that the difficulty in figuring something out is inversely proportional to the simplicity of what you are trying to accomplish. Case in point: yesterday I wanted to get a list of all the timer jobs in SharePoint. Having never done this, nor having any idea of exactly how to do it right off the top of my head, I turned to Google. I like to think my Google-fu is fair to good, so I normally find exactly what I’m looking for in the first hit. But on the topic of listing all SharePoint timer jobs, all it came up with was a PowerShell command (Get-SPTimerJob) and nothing more. Refined search after refined search continued to turn up nothing. So apparently I am the only person on the planet who needs to get a list of the timer jobs in C#. In case you are the second person on the planet who needs to do this, the code follows:

        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            var timerJobs = new List<SPJobDefinition>();
            foreach (var job in SPAdministrationWebApplication.Local.JobDefinitions)
            {
                timerJobs.Add(job);
            }
            foreach (SPService curService in SPFarm.Local.Services)
            {
                foreach (var job in curService.JobDefinitions)
                {
                    timerJobs.Add(job);
                }
            }
        });

    For reference, you have the two loops because the Central Admin web application doesn’t end up being in the SPFarm.Local.Services group, so you have to get it manually from the SPAdministrationWebApplication.Local reference.

    Read the article

  • rsync remote to local automatic backup

    - by Mark Molina
    Because all my work is stored on a remote server, I would like to automatically back up my server monthly and weekly. My server is running CentOS 5.5, and while searching the web I found a tool named rsync. I got my first update manually by using this command in a terminal: sudo rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP I am then prompted for the password for that user and Bob's my uncle. This backs up the necessary files from my remote server to my local device, but does somebody know how I can automate this? Like automatically running this script every Sunday? EDIT: I forgot to mention that I let DirectAdmin back up the files I need and then copy those files from the remote server to a local server.
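
    A minimal sketch of the cron side, assuming you first set up SSH key authentication so rsync can run without a password prompt (schedule, paths and log locations are just placeholders):

        # one-time setup: allow passwordless login for the backup user
        ssh-keygen -t rsa
        ssh-copy-id USERNAME@IPADDRESS

        # then add lines like these with "crontab -e":
        # weekly, every Sunday at 03:00
        0 3 * * 0 rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP >> /var/log/backup-weekly.log 2>&1
        # monthly, on the 1st at 04:00
        0 4 1 * * rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP >> /var/log/backup-monthly.log 2>&1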

    Read the article

  • SEO strategy for h1, h2, h3 tags for list of items

    - by Theo G
    On a page on my website I have a list of ALL the products on my site. This list is growing rapidly and I am wondering how to manage it from an SEO point of view. I am shortly adding a title to this section and giving it an h1 tag. Currently the name of each product in this list is not an h1/h2/h3/h4; it's just styled text. However, I was looking to make these h2/h3/h4. Questions: Is the use of h2/h3/h4 on these list items bad form, since they should be used for content rather than lists of links? I am thinking of limiting this main list to only 8 items and using h2 tags for each name. Do you think this will have a negative or positive effect overall? I may create a piece of script which counts the first 8 items in the list; these 8 will get the h2, and any after that will get h3 (all styled the same). If I do add h tags, should I put them just on the name of the product, or outside the a tag so they wrap all the info? Has anyone been in a similar situation, and if so, did they see any significant difference?

    Read the article

  • Network print to brother MFC-7420

    - by trampster
    I am trying to print to a Brother MFC-7420 from my Ubuntu 10.04 machine. The Brother is attached to a Windows XP machine and is shared. This is what I have tried: System > Administration > Printing, Add, expand Network Printer, Windows Printer via SAMBA, Browse (I can find the printer with no problems here), Forward, Choose Driver dialog, Brother. My printer is not in this list. So the next thing I tried was to download the printer driver from here: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/download_prn.html The driver installed fine, but my printer still does not appear in the list. I also tried installing the CUPS wrapper, but that gave the following error:

        Restarting Common Unix Printing System: cupsd [ OK ]
        cp: cannot stat `/usr/share/cups/model/MFC7420.ppd': No such file or directory
        dpkg: error processing cupswrappermfc7420 (--install):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         cupswrappermfc7420

    I tried connecting the printer directly, but even though I have installed the driver, when I go to Printers and click on the printer (it shows up fine as a USB printer) it says "searching for drivers" and then gives me a list, the same list as before, which doesn't have my printer. It really shouldn't be this hard. On Windows you don't have to install anything, it just works, and the same is true for my brother's Mac. How do I print to my printer?

    Read the article

  • Groovy Grapes in NetBeans IDE

    - by Geertjan
    The start of Groovy Grapes support in NetBeans IDE. Below you see a pure Groovy project, with the Groovy JAR and the Ivy JAR automatically on its classpath. There's also a Groovy script that makes use of a @Grab annotation. In the bottom left, in the Services window, you also see a Grape Repository browser, i.e., showing you the JARs that are currently in ".groovy/grapes". Click the images below to get a better look at them. Next, you see what happens when the project is run. The @Grab annotation automatically starts downloading the JARs that are needed and puts them into the ".groovy/grapes" folder. However, the "no suitable classloader found for grab" error message (which Google shows is a problem for lots of developers) prevents the application from running successfully: The final screenshot shows that I've put the JARs that I need onto the classpath of the project. I did that manually, hoping to learn from the NetBeans Maven project or the NetBeans Gradle project how to do that automatically. Also note that the @Grab annotation has been commented out. Now the error message about the classloader is avoided and the project runs. What needs to happen for Groovy Grapes support to be complete in NetBeans IDE: Figure out how to add the downloaded JARs to the project classpath automatically. Fix the refresh problem in the Grape Repository browser, i.e., right now the refresh doesn't happen automatically yet. Hopefully find a way to get around the grab classloader problem, i.e., it's not ideal that one needs to comment out the annotation. Let the user specify a different Grape repository, i.e., right now ".groovy/grapes" is assumed, but the user should be able to point the repository browser to something different. Maybe there should be support for multiple Grape repositories? Comments/feedback/help is welcome.

    Read the article

  • Misbehaving Network Printers - options?

    - by Dan Kelly
    We are having some issues with printers on our network. We have 3 floors, 2 printers per floor (A3 & A4), all connected to the same print server. The issue is that the same printer may not behave the same on two different, seemingly identical desktops. The commonest place this is seen is with our bulk print script in AutoCAD - occasionally drawings may print landscape on portrait paper, despite the drawings always being landscape... Does anyone have any suggestions on what we can check or try? The current line of attack is to set up a new print server with the HP Universal Print Driver rather than the device-specific drivers, and to re-add the printers using exactly the same method on all desktops. Sound good?

    Read the article

  • Configure Git to use Beyond Compare for image diff

    - by Barney
    Because we work with a number of sprites, the kind of specialised diff views provided by Beyond Compare would be ideal for seeing which of two versions I'm after when conflicts arise. I've already configured Git to use Beyond Compare as my primary diff and merge tool as described in their integration guide — it specifically goes into how to configure TortoiseSVN to use it for images, and I've found articles talking about .gitattributes in general and how to script interactions from a *nix shell — but it's not obvious to me how to use the advice in these guides to make a simple change that says "use the default diff & merge bindings for files determined to be images, too". For the record, I'm doing all this on Windows :P
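
    A rough sketch of one way to do this with a custom Git diff driver; the driver name, wrapper path and the bcomp executable name are assumptions (on Windows under Git Bash the paths and the launcher name, bcomp vs. BCompare, will differ), so treat this as an outline rather than the documented Beyond Compare integration:

        # route image files to a named diff driver
        echo "*.png diff=bc-image" >> .gitattributes
        echo "*.jpg diff=bc-image" >> .gitattributes

        # Git calls the driver as: <command> path old-file old-hex old-mode new-file new-hex new-mode,
        # so a tiny wrapper hands the old ($2) and new ($5) versions to Beyond Compare
        printf '#!/bin/sh\nbcomp "$2" "$5"\n' > /usr/local/bin/bc-image-diff
        chmod +x /usr/local/bin/bc-image-diff

        # point the driver at the wrapper
        git config --global diff.bc-image.command /usr/local/bin/bc-image-diff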

    Read the article

  • Force Windows to cache executables without running them?

    - by Josh Einstein
    Is there a way to force Windows to pre-load certain EXE/DLL binaries into its Prefetch/SuperFetch cache as if they had been executed? I have a particular application that loads pretty slowly on first run, but if it's "warm" (recently executed) it starts pretty quickly. I'd like to prime the cache early in the background before the application is needed. But since it shows a UI, I'm looking for a way to do this silently, so simply launching the application isn't ideal. Thank you in advance. Update: prompted by David's suggestion in the comments, I wrote a PowerShell script to memory-map the files, seek to the end, and close them. I haven't done any controlled tests yet and it could just be my imagination, but Sublime Text (the application in question) appeared to load much more quickly this time around, even though I hadn't used it for several hours.

    Read the article

  • How to Install WebLogic 12c ZIP on Linux

    - by Bruno.Borges
    I knew that WebLogic had this small ZIP distribution, of only 184M, but what I didn't know was that it is so easy to install on Linux machines, especially for development purposes, that I thought I had to blog about it. You may want to check this blog, where I found the missing part of this how-to, but I'm blogging it again because I wanted to put it in a simpler way, straight to the point. And if you are looking for a how-to for Mac, check Arun Gupta's post. So, here's the step-by-step:

    1 - Download the ZIP distribution (don't worry if your system is x86_64). Don't forget to accept the OTN Free Developer License Agreement!

    2 - Choose where to install your WebLogic server and your domain, and set it as your MW_HOME environment variable. I will use /opt/middleware/weblogic for this how-to:

        export MW_HOME=/opt/middleware/weblogic

    Make sure this path exists in your system. 'mydomain' will be used to keep your WebLogic domain:

        mkdir -p $MW_HOME/mydomain

    3 - If you don't have your JAVA_HOME environment variable configured yet, do it. Point it to where your JDK is installed:

        export JAVA_HOME=/usr/lib/jvm/default-java

    4 - Unzip the downloaded file into MW_HOME:

        unzip wls1211_dev.zip -d $MW_HOME

    5 - Go to that directory and run configure.sh:

        cd $MW_HOME
        ./configure.sh

    6 - Call the setWLSEnv.sh script:

        . $MW_HOME/wlserver/server/bin/setWLSEnv.sh

    7 - Create your development domain. It will ask you for a username and password. I like to use weblogic / welcome1:

        cd $MW_HOME/mydomain
        $JAVA_HOME/bin/java $JAVA_OPTIONS -Xmx1024m \
          -Dweblogic.management.allowPasswordEcho=true weblogic.Server

    8 - Start WebLogic and access its web console:

        (sh startWebLogic.sh &); sleep 10; firefox http://localhost:7001/console

    Usually, it takes only 10 seconds to start a domain, and 5 more to deploy the Administration Console (on my laptop). :-) Enjoy!

    Read the article

  • How can I audit a Linux filesystem for files which have been changed or added within a specific time

    - by Bcos
    We are a website design/hosting company running several sites on a Linux server using Joomla 1.5.14, and recently someone was able to exploit a vulnerability in the RW Cards component to write arbitrary files and modify existing files on our filesystem, enabling them to do some nasty things to our customers' sites. We have removed the vulnerable modules from all sites but are still seeing some problems. We suspect that they still have some scripts installed, and we need a way to audit anything that has been changed or added in the last 10 days. Is there a command or script we can run to do this?
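
    There is no after-the-fact audit trail unless something like auditd was already running, but find can at least list files whose contents (mtime) or metadata (ctime) changed in the last 10 days. A hedged sketch; the web root path is an assumption:

        # files modified in the last 10 days under the web root, oldest first
        find /var/www -type f -mtime -10 -printf '%T@ %p\n' | sort -n

        # ctime also catches permission/ownership changes
        find /var/www -type f -ctime -10 -ls

        # a common quick check for injected PHP in a Joomla tree
        find /var/www -name '*.php' -mtime -10 -exec grep -l 'base64_decode' {} \;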

    Read the article

  • Convert uploaded video files to mp4 using PHP [closed]

    - by Subin
    I created a PHP video uploading script. I need to convert these files to MP4 for an HTML5 video player using PHP while uploading. How can I do that? Here is the PHP code:

        <?php
        if (isset($_POST['submit'])) {
            $user = $_COOKIE['VisitorName'];
            include('config.php');
            session_start();
            $session_id = '1'; // $session id
            $path = "/home/simsu/subins/videos/data/videos/";
            $valid_formats = array("wmv", "ogv", "mp4", "3gp", "ogg");
            if (isset($_POST) and $_SERVER['REQUEST_METHOD'] == "POST") {
                $name = $_FILES['uploadedfile']['name'];
                $size = $_FILES['uploadedfile']['size'];
                if (strlen($name)) {
                    list($txt, $ext) = explode(".", $name);
                    if (in_array($ext, $valid_formats)) {
                        if ($size < (100024 * 100024)) {
                            $actual_image_name = $path . time() . ".mp4";
                            $tmp = $_FILES['uploadedfile']['tmp_name'];
                            $upurl = "http://vtube.subins.com/files/video?vid=" . time();
                            $title = $_POST['vn'];
                            mysql_query("INSERT INTO videos(title,user,url,vid,ext) VALUES ('$title', '$user','$upurl',NOW(),'$ext')");
                            echo '<br><h1>' . $_FILES['uploadedfile']['name'] . " uploaded.</h1>";
                        } else {
                            echo "<br><h1>Video file size max 100 MB";
                        }
                    } else {
                        echo "<br><h1>Invalid file format..";
                    }
                } else {
                    echo "<br><h1>Please select a video..!";
                }
                exit;
            }
        }
        ?>
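
    The conversion itself is usually handed off to ffmpeg rather than done in pure PHP: after move_uploaded_file(), the script (or better, a background job) calls ffmpeg via exec(). A hedged sketch of the command, assuming ffmpeg is installed with libx264 support (older builds may also need "-strict experimental" for the AAC encoder); the input and output paths are placeholders:

        # transcode the uploaded file into an HTML5-friendly MP4 (H.264 video + AAC audio)
        ffmpeg -i /tmp/uploaded_video -c:v libx264 -preset fast -crf 23 \
               -c:a aac -b:a 128k -movflags +faststart \
               /home/simsu/subins/videos/data/videos/OUTPUT.mp4

    Running this synchronously during the upload request ties up PHP for the length of the transcode, which is why most setups queue the conversion and serve the MP4 once it is ready.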

    Read the article

  • Best practices in versioning

    - by Gerenuk
    I develop some scripts for data analysis in a small team. For the moment we use SVN, but not in a very structured way; we haven't even looked at how to use branches, even though we need that functionality. What do you suggest as the best practice for setting up the following system: there are two code bases (core and plugins); versions can sometimes be incompatible with previous scripts; and individual features are being developed and not yet finished, while other fixes have to be made urgently to the code. In the end we don't deliver the code as a package, but rather place the Python scripts in some directory (with version names?). Another Python script, which serves as a configuration, chooses the desired version, sets the path to these libraries, and then starts to import the modules. I saw stable releases being named "trunk", so I did the same. However, there are no version numbers yet. Core and plugins are different repositories; however, we have to match versions for compatibility. Can you suggest some best practices or references to ease development and reduce chaos? :) Some suggested Git. I haven't heard about it, but I'm free to change.
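
    For what it's worth, a sketch of how the "urgent fix while a feature is unfinished" case is commonly handled in Git with branches and tags (the branch names, tag numbers and the "trunk" stable branch are placeholders matching your current naming):

        # long-lived feature work stays on its own branch
        git checkout -b feature/new-import trunk

        # urgent fix: branch from the last stable tag, fix, tag a new version
        git checkout -b hotfix/1.2.1 v1.2.0
        git commit -am "Fix urgent parsing bug"
        git tag v1.2.1

        # merge the fix back into the stable branch
        git checkout trunk && git merge hotfix/1.2.1

    The version tags also give your configuration script something concrete to point at instead of bare directory names.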

    Read the article

  • mod_rewrite and SEO friendliness

    - by John Doe
    My website has an atypical structure and I'm not sure if this could create problems in the long run, especially for SEO positioning purposes. I have a single, large PHP script, and I use the Apache module mod_rewrite in the .htaccess file to create friendly URLs, for example:

        RewriteRule ^$ /index.php?section=Main
        RewriteRule ^createArticle$ /index.php?section=Main&view=CreateArticle
        RewriteRule ^configuration$ /index.php?section=Configuration
        RewriteRule ^article/([0-9]{1,10})$ /index.php?section=Article&view=Default&id=$1
        RewriteRule ^deleteArticle/([0-9]{1,10})$ /index.php?section=Article&view=Delete&id=$1
        RewriteRule ^reportArticle/([0-9]{1,10})$ /index.php?section=Article&view=Report&id=$1
        RewriteRule ^logIn$ /index.php?section=Authentication
        ...

    So, www.example.com/index.php?section=Article&view=Default&id=105 would become www.example.com/article/105. The only real physical file is index.php, in which the parameters of the queried URL are processed and the corresponding result is output. My question is: do crawling robots (e.g. Googlebot) recognize these links? Do they index the HTML output by index.php with the specified parameters as if it were an actual HTML file? Also, would this become a problem when creating a sitemap?

    Read the article

  • Double try_files to solve the nginx's "No input file specified" issue

    - by Howard
    I am following the nginx wiki (http://wiki.nginx.org/WordPress) to set up my WordPress site:

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

    With the above lines, when a requested static file is not found, the request falls back to WordPress's index.php, which is fine, but... Problem: when I request a non-existent PHP script, e.g. http://www.example.com/foo.php, nginx gives me "No input file specified". I want nginx to return a 404 instead of that message, so in the main FastCGI config I added a second try_files:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            include /etc/nginx/fastcgi_params;
            ...
        }

    This worked, but I am wondering if there is a better way to handle it?

    Read the article
