Search Results

Search found 23079 results on 924 pages for 'local variables'.


  • Single domain user can't install a specific shared network printer

    - by drpcken
    I have a file server serving up shared network printers, and we've never had any issues in the past. Now one specific domain user (just one) gets this error when trying to install one specific printer:

        You do not have sufficient access to your computer to connect to the selected printer

    That user can install all the other printers without a problem, and all my other domain users can install this printer without a problem. I've removed the driver from the local client and tried again, but hit the same error. Even as an administrator I get this error. Is there something I'm missing?


  • NFS share access - Permission denied

    - by rgngl
    I'm trying to share a directory on my NAS device (a WD My Book World Edition) over NFS with another machine on my local network. The directory on the NAS looks like this:

        drwxr-x--- 15 git git 4096 Nov 17 01:05 git/

    and the git user's IDs on the NAS are:

        [root@myhost DataVolume]# id git
        uid=505(git) gid=505(git)

    I've played with many different parameters in /etc/exports, and this is what I have there currently:

        /DataVolume/git 192.168.0.20(async,rw,no_root_squash,no_subtree_check)

    On the client side I have a git user and git group with the same IDs, to match the ones on the server:

        user@myclient:~$ id git
        uid=505(git) gid=505(git) groups=505(git)

    I mount the directory with:

        sudo mount myhost:/DataVolume/git -t nfs git/

    and the mounted directory looks like:

        drwxr-x--- 15 git git 4096 Nov 17 01:05 git

    After these steps I can't cd into that directory as any user, including git and root; I get a Permission denied error. Thanks in advance for any help.
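
    One way to narrow this down (a hedged diagnostic sketch, assuming standard NFS userland tools exist on both boxes; WD firmware may lack some of them) is to confirm what the server actually exports and how the IDs look on the mounted side:

        # on the client: what does the server export, and to whom?
        showmount -e myhost

        # on the NAS, after editing /etc/exports, re-export the list
        exportfs -ra

        # on the client: numeric owner/group of the mount point;
        # anything other than 505/505 means the IDs don't match in practice
        ls -ldn git/

        # try the access explicitly as the git user
        sudo -u git ls git/

    If the numeric IDs check out, it's also worth verifying that every parent directory of /DataVolume/git on the server is traversable (execute bit) by the git user, since the server enforces the whole path.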


  • Can't start a service (sudo) remotely from script and keep it running

    - by Greg Bernhardt
    I have a service (Tomcat) that needs sudo to be started. I made a simple script on the remote server in /root/bin/test.sh:

        #!/bin/sh
        sudo service tomcat start
        read

    (The script needs to do other stuff too; it's just pared down for simplicity.) When I run it directly on the remote server, Tomcat starts and continues running after I disconnect. When I run it remotely, the process starts; while the script is paused at the read, I can see it on the server with:

        ps -ef | grep tomcat

    but once the script ends, it's gone. I've tried various combinations of nohup, screen, and & on the commands, both on the local machine and in the remote machine's test.sh, but I can't seem to get it working:

        ssh -t [email protected] "/root/bin/test.sh"
        ssh -t [email protected] "nohup /root/bin/test.sh"
        ssh -t [email protected] "nohup /root/bin/test.sh &"
        ssh -t [email protected] "screen /root/bin/test.sh &"
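
    A pattern that usually fixes this (a sketch, assuming sudo is set up to run without a password and without requiretty for this user) is to fully detach the remote command from the ssh session, so the shell never delivers SIGHUP to it when the connection closes:

        # inside /root/bin/test.sh: detach the service start from the tty
        nohup sudo service tomcat start </dev/null >/tmp/tomcat-start.log 2>&1 &

        # or from the local machine, without forcing a tty allocation:
        ssh [email protected] 'nohup /root/bin/test.sh </dev/null >/dev/null 2>&1 &'

    The </dev/null matters: with stdin still attached to the ssh channel, the session cannot close cleanly and the child gets hung up along with it. The blocking read would also have to go, since there is no terminal to read from in this mode.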


  • Decreasing lag on the router while gaming

    - by user2699451
    I had absolutely no idea where to post this question and get a professional answer, but here goes. I guess everyone reading this has played online. I was playing LoL again tonight and my brother decided that now was a great time to go on YouTube and start watching a movie, so my ping (connecting from South Africa to the EU West server), normally around 190-220 on average, started spiking to 2000 with an average of 600-800. That raised the question: how the hell can I "kick" him off for the time being? I tried reasoning it out with him, but it's like playing chess with a pigeon; he's studying to be an engineer and I just can't win an argument with him, so I need to step it up a level. I have in the past used the aireplay method of sending deauth packets, but it only helped so much. Is there another way of either kicking a peer off the local Wi-Fi, decreasing the lag spikes while in session, or even splitting the bandwidth equally in two or three? What do I do? P.S. Sorry if this is off topic; if it is not appropriate, just say which website will be able to help or assist me.


  • JavaOne LAD Call for Papers

    - by Tori Wieldt
    The JavaOne LAD Call for Papers closes next Friday, October 4. Here are Java evangelist Stephen Chin's top three reasons why you should submit a session:

    1) Imagine a parallel world where Java is king, where the government has mandated that all software be open source and has recognized Java as an official platform. That is exactly what happened in Brazil, and it shows in all aspects of the country, from government systems to TV standards.

    2) A JUG in every village: Brazil has the most user groups of any country in the world, by a significant margin. "I've stayed after JavaOne to visit several cities and gotten a great audience whether it was a large city like Brasilia or Goiania, or a coastal town like Fortaleza, Salvador, or Maceio," Chin explains.

    3) A community-supported conference: SouJava and the entire Brazilian user group community are active and involved with JavaOne Brazil, making it a really engaging regional JavaOne conference.

    Submissions should be:

    - From the community; all proposals should be non-Oracle.
    - Java-related topics (not technologies such as Flex, .NET, Objective-C, etc., unless it's specifically a topic about how such things INTEGRATE with Java).
    - Non-product pitches.
    - Interesting/innovative uses of Java.
    - Practical, relevant case studies/examples/practices, etc.

    The call for papers will close on Friday, October 4, 2012 at 11:59 pm local time. We look forward to hearing from you!


  • Will NTP work in an isolated network (in the absence of a reliable time source)?

    - by Anand
    Hi, I am investigating a typical NTP problem. The setup is as follows: FreeBSD is being compiled and run on OpenSolaris. The config file on the OpenSolaris machine lists a Linux machine and another OpenSolaris machine as servers, and those server machines sync time only with themselves (their local clock). The server machines are running NTP as well. Within a few minutes of starting the NTP daemon, the client starts syncing time with itself only and remains in that state afterwards: all servers are discarded and no time syncing is done with them. My question is, is there any fundamental problem with this setup? Will NTP work in such an isolated network, with no direct or indirect connection to a reliable Internet time source?
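
    NTP can run in an isolated network if one machine is deliberately made the authoritative source by serving its local clock at a high stratum. A minimal sketch of the classic configuration (assuming a classic ntpd; 127.127.1.0 is ntpd's local-clock driver address, not a real host):

        # /etc/ntp.conf on the designated master: serve the local clock at stratum 10
        server 127.127.1.0
        fudge 127.127.1.0 stratum 10

        # /etc/ntp.conf on each client: point only at the master, e.g.
        # server <master-ip> iburst

    Clients should not also list their own local clock; if they do, they may prefer themselves and discard the real servers, which matches the symptom described above.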


  • How should I capture Linux kernel panic stack traces?

    - by Alnitak
    What's the current best practice for capturing full kernel stack traces on a Linux system (RHEL 5.x, kernel 2.6.18) that occasionally panics in a device driver? I'm used to the "old" SunOS way of doing things: crash dumps get written to swap, and on reboot the dump gets retrieved into the local file system. man 8 crash refers to diskdump, but that appears to be unsupported and/or deprecated. I've played with kdump, but it's unclear whether I can get a stack trace from that; triggering a panic via magic SysRq didn't create one. It also seems wasteful to reserve so much memory (128 MB) just for a kexec crash-recovery kernel.
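
    On RHEL 5 the supported route is kdump (built on kexec): the capture kernel writes a vmcore that the crash utility can walk for a stack trace. A rough sketch of enabling it (package and parameter names as I recall the RHEL 5 era defaults, so treat them as assumptions):

        # reserve memory for the capture kernel on the kernel command line in grub.conf:
        #   crashkernel=128M@16M

        yum install kexec-tools
        chkconfig kdump on
        service kdump start

        # after a panic and reboot, the dump lands under /var/crash/<timestamp>/;
        # examine it with crash plus the matching kernel-debuginfo package:
        crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /var/crash/*/vmcore

    Inside crash, the bt command prints the panicking task's stack trace, which is usually the piece you want from a device-driver panic.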


  • pymatplotlib without xserver

    - by vigilant
    Is it possible to use networkx or pymatplotlib without an X server running? I keep getting the following error with the first networkx example:

        Traceback (most recent call last):
          File "test.py", line 17, in <module>
            nx.draw(G,pos,node_color='#A0CBE2',edge_color=colors,width=4,edge_cmap=plt.cm.Blues,with_labels=False)
          File "/usr/local/lib/python2.6/dist-packages/networkx-1.3-py2.6.egg/networkx/drawing/nx_pylab.py", line 124, in draw
            cf=pylab.gcf()
          File "/usr/lib/pymodules/python2.6/matplotlib/pyplot.py", line 276, in gcf
            return figure()
          File "/usr/lib/pymodules/python2.6/matplotlib/pyplot.py", line 254, in figure
            **kwargs)
          File "/usr/lib/pymodules/python2.6/matplotlib/backends/backend_tkagg.py", line 90, in new_figure_manager
            window = Tk.Tk()
          File "/usr/lib/python2.6/lib-tk/Tkinter.py", line 1646, in __init__
            self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
        _tkinter.TclError: couldn't connect to display ""
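
    The traceback shows matplotlib defaulting to the TkAgg backend, which needs a display. A sketch of forcing a non-interactive backend instead (Agg renders to image files only; ~/.matplotlib/matplotlibrc is the per-user config location for matplotlib of this vintage):

        # select the Agg backend for all of this user's matplotlib sessions
        mkdir -p ~/.matplotlib
        echo 'backend : Agg' >> ~/.matplotlib/matplotlibrc

        # or per run, before pyplot is ever imported:
        python -c "import matplotlib; matplotlib.use('Agg'); import matplotlib.pyplot as plt; plt.figure(); plt.savefig('out.png')"

    With Agg selected, savefig() works headless and nx.draw() no longer tries to open a Tk window; plt.show() simply becomes a no-op.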


  • Datacenter Backup Strategy

    - by EasyEcho
    What are common approaches to backup in remote data centers? I am already familiar with general backup principles and have a very good backup strategy for our local data center, but I'm having great difficulty extending it to a remote data center. We currently do a full backup on Friday and differentials Monday through Thursday, and rotate disks offsite on Friday morning; rinse and repeat, week after week. BTW, we use disks and have been very happy with this approach. We could buy a large storage server and back everything up to it, but that solution doesn't give us offsite. We could encrypt and upload to Amazon or some other online storage, but that would take a large amount of time given the data and would be rather expensive, paying for the bandwidth leaving the data center and arriving at Amazon. We could drive to the data center every Friday and continue to rotate disks as we do now, but that just seems old-fashioned. What am I missing? Are there better options?
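
    For the encrypt-and-upload option, the bandwidth math changes a lot if only deltas leave the data center. A sketch using duplicity (an assumption on my part that it fits this environment; it does GnuPG-encrypted full/incremental chains over SFTP and other backends, and the host below is a placeholder):

        # weekly full, incrementals in between, encrypted, to an offsite SFTP target
        duplicity --full-if-older-than 7D /data sftp://backup@offsite.example.com//backups/data

        # prune chains older than 60 days
        duplicity remove-older-than 60D --force sftp://backup@offsite.example.com//backups/data

    After the first full backup, only changed data crosses the wire, which may make the online-storage route cheaper than it first appears.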


  • Transparently cache files from a network drive in Linux

    - by Vadim
    We have a Linux server that reads files from a network drive and processes them. In a common scenario, a user will log in and access the same files over and over again. The size of the files varies, but the larger ones can be around 50+ MB. The files seldom change. I was wondering if it's somehow possible to cache the files transparently. I don't want to (and can't) change the program that reads the files, nor do I control the protocol by which the files are accessed. I just want something to detect that I access a certain path, copy the file locally (if needed), and then read the file from the local drive. I've read about Bcache but can't figure out if it's what I need. Do you have any suggestions? Thanks, Vadim.
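
    If the network drive is NFS, the kernel's FS-Cache layer plus the cachefilesd daemon does nearly exactly this, transparently to the reading program: file contents get cached on a local disk and served from there on repeat access. A sketch (assuming NFS and a Debian-style system; package and init names vary):

        apt-get install cachefilesd
        # enable the daemon; on Debian/Ubuntu set RUN=yes in /etc/default/cachefilesd
        service cachefilesd start

        # mount the share with the fsc option to opt in to caching
        mount -t nfs -o fsc server:/export /mnt/share

    Whether this applies depends on the protocol: FS-Cache hooks NFS (and AFS), so if the share is CIFS or something proprietary it may not help, and Bcache is a block-device cache, which is why it doesn't fit this use case.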


  • Why is rsync.exe [cwRsync] trying to open a port when in client mode?

    - by hemancuso
    I'm trying to use a Cygwin-compiled version of rsync (the cwRsync package) on Windows, and in seemingly every configuration I test, Windows Firewall prompts the user to allow inbound traffic. If you deny the request, everything works fine, as expected. I'm doing a vanilla push:

        rsync.exe localpath user@remotepath:/absolutepath

    and it works just fine. I've also tried the command after deleting ssh from the path, and using rsync on local paths only: still a firewall prompt. Why is this listen() happening, and is there a way I can force the client not to attempt to listen, without recompiling and maintaining a patch?


  • Why can't tuxboot and ubuntu play well together?

    - by mmr
    I'm trying to get Clonezilla to run off of a USB stick, and it seems the right way to do that is via Tuxboot. Tuxboot is not compilable on Ubuntu: I used git to pull it from the repository, but building it is apparently not supported there (the build script just tries to install Windows things), and when I run the install script, the qmake-linux step wants my qmake executable to be in the same directory as the stuff I pulled down. Let's just say that if there's a way to do this easily, I ain't seein' it. So then I downloaded the prebuilt Linux binary, the most recent of which is tuxboot-linux-25. When I try to run it, it fails because libpng12.so.0 isn't found. I installed that via instructions I found on the web (which Firefox seems to have already deleted from my history, yay!), and then added the /usr/local/lib directory to ldconfig via emacs (had to install that too, of course): http://ubuntuforums.org/showthread.php?t=369848 I still get the error that libpng12.so.0 cannot be opened because 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What to do next? (For the record, doing this in Windows is painless: download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)
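
    For the libpng12.so.0 error specifically, a sketch of the usual checks (assuming Ubuntu still carries the compatibility package for your release):

        # if the package is still in the archive, this is the clean fix:
        sudo apt-get install libpng12-0

        # if you copied libpng12 into /usr/local/lib yourself, refresh the
        # linker cache and ask the loader what it actually resolves:
        sudo ldconfig
        ldd ./tuxboot-linux-25 | grep libpng

    One classic gotcha: a 64-bit ldconfig entry does not satisfy a 32-bit binary (or vice versa), and the loader reports that mismatch as exactly this 'No such file or directory' message even though ldconfig -p lists the library. Comparing `file tuxboot-linux-25` with `file /usr/local/lib/libpng12.so.0` will show whether the architectures agree.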


  • Chrome - Why am I automatically authenticated to a web app even after clearing browser cookies?

    - by Howiecamp
    I am accessing a web application using Chrome. If I sign out of the app, clear all Chrome history/cookies/etc. (even Flash cookies, which are now handled by Chrome in the same Clear History area), and then re-access the site, I am automatically logged in without being prompted for credentials. I then launched Chrome in Incognito mode and was able to reproduce the same behavior; however, I was prompted for credentials on the first logon in Incognito mode. The web application behaves as expected in Internet Explorer 10. Some info about the application:

    - It's a SharePoint site using NTLM authentication.
    - The credentials are Active Directory-based, as the username is domain\username.
    - My connection is over the Internet, and there is no AD relationship with my local Windows account or PC. In other words, my locally logged-on user and my PC are not in any way part of their AD domain.
    - The site is running SSL on port 443.

    Why might Chrome be automatically authenticating me?


  • Updating the remote of a git branch after renaming it

    - by Dror
    Consider the following situation. A remote repository has two branches, master and b1. It also has two clones, repo1 and repo2, and both have b1 checked out. At some point, in repo1, b1 was renamed. As far as I can tell, the following is the right procedure to rename b1:

        $ git branch -m b1 b2                 # rename b1 to b2 locally
        $ git push origin :b1                 # delete b1 on the remote
        $ git push --set-upstream origin b2   # create b2 remotely and track it

    Now, afterwards, in repo2 I face a problem: git pull doesn't pull the changes from the branch (which is now called b2 remotely). The error returned is:

        Your configuration specifies to merge with the ref 'b1' from the remote, but no such ref was fetched.

    What is the right way to do this, both the renaming part and the updating in other clones?
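
    For repo2, a sketch of the usual cleanup (assuming the remote is named origin and the local branch should simply follow the renamed one):

        git fetch origin --prune    # drop the stale origin/b1, pick up origin/b2
        git branch -m b1 b2         # rename the local branch to match
        git branch -u origin/b2 b2  # point it at the new upstream

    git branch -u is the newer spelling of --set-upstream-to; on older Git versions the equivalent is `git branch --set-upstream b2 origin/b2`.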


  • Creating a SQL Azure Database Should be Easier

    - by Ken Cox [MVP]
    Every time I try to create a database + tables + data for Windows Azure SQL, I get errors. One of them is "'Filegroup reference and partitioning scheme' is not supported in this version of SQL Server." It's partly due to my poor memory (since I've succeeded before) and partly due to the failure of tools that should be helping me. For example, when I want to create a script from an existing database on my local workstation, I use SQL Server Management Studio (currently v11.0.2100.60). I go to Tasks > Generate Scripts, which brings up the nice Generate and Publish Scripts wizard. When I go into the Advanced button, under Script for Server Version, why don't I see SQL Azure as an option by now? The tool should be sorting this out for me, right? Maybe this is available in SQL Server Data Tools? I haven't got into that yet. Just merge the functionality with SSMS, please. Anyway, I pick an older version of SQL for the target and still need to tweak the script for Azure. For example, I take out all the "[dbo]." stuff. Why is it put there by the wizard? I also have to get rid of "ON [PRIMARY]" to deal with the error I noted at the top. Yes, there's information on what a table needs to look like in SQL Azure, but the tools should know this so I don't have to mess with it.


  • Protect apache pages by URL

    - by Thomas
    Is it possible to allow access to specific URLs only from certain networks? Basically, I would like to restrict access to the admin area, whose pages are prefixed by /admin, to the local network only; essentially, all of /admin/* should be forbidden to the public. Can Apache handle such a case? Thanks.

    UPDATE: Using your suggestions I came up with:

        <LocationMatch admin>
            Order allow,deny
            Deny from all
            Allow from 192.168.11.0/255.255.255.0
        </LocationMatch>

    However, I get a 403 even though I am on the network. Additionally, if I put Apache behind HAProxy, is this going to work? Because the traffic will be coming to Apache from 127.0.0.1.
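
    A sketch of why the 403 happens, under Apache 2.2 semantics: with `Order allow,deny`, Deny directives are evaluated last and win, so `Deny from all` overrides the Allow. Reversing the order lets the Allow win for the listed network:

        <LocationMatch ^/admin>
            Order deny,allow
            Deny from all
            Allow from 192.168.11.0/24
        </LocationMatch>

    Behind HAProxy this would still break, because Apache then sees every client as 127.0.0.1. The usual options are to enforce the restriction in HAProxy itself, which knows the real source address, or to restore the client IP from X-Forwarded-For (e.g., with mod_rpaf on Apache 2.2) before access control runs.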


  • Is anyone familiar with this message in Kaspersky Internet Security 2010?

    - by tintincutes
    Hi, I had just started my computer and opened a website to check my mail when a Kaspersky Internet Security 2010 window popped up about this file:

        C:\Documents and Settings\username\Local Settings\Application Data\Mozilla\Firefox\Profiles\rse47wp8.default\Cache\C\79\D5CC9d01

    Is anyone familiar with this? I checked the path, and it doesn't exist. I don't remember ever having a file called D5CC9d01. Can somebody please tell me whether this is a virus? Thanks.


  • Permissions to run a SharePoint 2010 Application Pool?

    - by Michael Stum
    I'm currently in the process of setting up a SharePoint 2010 farm. In my dev environments, I have one account that is local admin, farm administrator, and runs all application pools. For the production environment, I would like to follow security best practices and run the web applications (at least two: the main portal and My Sites) under separate domain accounts. It's been some time since I worked with IIS, and I remember there were issues with non-admin users accessing files in C:\inetpub. On the other hand, SharePoint "automagically" sets most permissions anyway. Does anyone have experience with which permissions I need to give the domain accounts, at minimum, for this to work?


  • MacBook repeatedly disconnects from Wi-Fi

    - by redwall_hp
    I have an early-2008 MacBook (2.4 GHz). The Wi-Fi router I have at home is a Linksys WRT54GX2 that I've had for a few years. My MacBook has recently started disconnecting from the router every few minutes, which is rather annoying. I can reconnect without having to restart the router or anything; it seems the MacBook is just dropping the connection. I have tried changing the channel on the router, and upgrading the laptop from Leopard to Snow Leopard made no difference either. I'm only about six feet from the Linksys device, so distance isn't an issue, and this only happens with the Linksys router; I can use the local library's open network without any issues. The problem also seems to become more pronounced after midnight. What could the problem be? Edit: Here are the logs that Spiff requested: http://pastie.org/951761


  • Internet slow on one router only (the problem occurs only in Ubuntu)

    - by mrSuperEvening
    Internet works perfectly on every other router, but browsing at home is bad (slow browsing and slow loading times). I changed the DNS servers to 8.8.0.0, which still doesn't help. And funnily, download speed is extremely high on this network (torrents, for example), but using browsers and loading websites is extremely slow (only on this network). Do I need to change something in the router settings, or what can I try? By the way, I use a wired connection to the router. EDIT: There are no problems when using Windows. EDIT: ifconfig:

        eth0      Link encap:Ethernet  HWaddr f2:4d:a0:c0:3f:4c
                  inet addr:192.168.11.8  Bcast:192.168.11.255  Mask:255.255.255.0
                  inet6 addr: fe80::f24d:a2ff:fec6:3f4c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:206798 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:219570 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:76680734 (76.6 MB)  TX bytes:21738160 (21.7 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:160 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:11094 (11.0 KB)  TX bytes:11094 (11.0 KB)

        $ ping -c 2 4.2.2.2
        PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data.
        --- 4.2.2.2 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

        $ ping -c 2 google.com
        PING google.com (213.159.32.147) 56(84) bytes of data.
        64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=1 ttl=61 time=0.936 ms
        64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=2 ttl=61 time=0.937 ms
        --- google.com ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1001ms
        rtt min/avg/max/mdev = 0.936/0.936/0.937/0.030 ms


  • lamp server permissions on development server

    - by user101289
    I run a LAMP server on an Ubuntu laptop I use only for development. I am not greatly concerned with security, since the server is never accessible outside the local network and it's turned off when I'm not using it. My question is: what is the simplest and best way to set permissions/users/groups so that when my own user creates, edits, or writes files in the web root, I won't need to go through and chmod/chown everything back to the www-data user? Should I add myself to the www-data group? Or chown the web root to www-data:myself? Or is there a best practice for this situation, so I don't have to keep resetting the ownership of these files? Thanks.
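
    A common arrangement for a single-developer box (a sketch; the /var/www path is an assumption, adjust to your web root): group-own the web root by www-data, join that group, and set the setgid bit so new files inherit it:

        sudo usermod -aG www-data "$USER"   # takes effect at next login
        sudo chown -R "$USER":www-data /var/www
        sudo chmod -R g+rw /var/www
        sudo find /var/www -type d -exec chmod g+s {} \;

    Files stay owned by you for easy editing, while the server keeps group read (and write, where granted) without any further chown/chmod rounds.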


  • Moving domain currently on Google Apps to my own machines

    - by mag
    I own a domain that currently uses Google Apps (it was free back then). Due to recent events I have decided I want to move everything to where I can control the data, which basically means either using a dyndns solution and running my own mail server at home, or (better) using a hosted machine to receive the mail and then moving it via a MUA to a local machine. My problem is that I know basically next to nothing about DNS and such, while I have installed mail servers before; I simply never had the need to work with DNS. I'd really appreciate it if someone could point me in the right direction. What do I need to do to:

    - terminate the Google Apps management of the domain, and
    - set up DNS entries so that mail goes to the new server instead of Google Apps?

    Additionally, if I decide to go the hosted route and put up a machine via a super-cheap hosting seller (say, the cheapest OVH one at a few euros a month), how can I tell the world that that machine is the mail server? Of course I would be wise to do a full backup of the mail on Google Apps first, and that's easy enough to do over IMAP (right?).
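
    For the "tell the world" part, that is exactly what a DNS MX record does: whoever hosts the DNS zone for the domain needs an entry pointing at the new machine. A sketch in zone-file notation (both names and the address are placeholders):

        mail.example.com.   IN A    203.0.113.10      ; the hosted machine
        example.com.        IN MX   10 mail.example.com.

    Terminating the Google Apps side then mostly amounts to removing the Google MX records (the ASPMX.L.GOOGLE.COM set) from the zone and cancelling the Apps account once the IMAP backup has safely finished.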


  • Why does 12.04 try but fail to hibernate, even after I enabled hibernation?

    - by Roger Davis
    Regarding the info quoted below from another post: my system will try (as it says below) to hibernate, but it won't get all the way there. Hard-drive activity stops, but the machine does not shut down. If I turn the power off and then back on, it will start, but I have to restore the previous session in the browser, and other open apps don't restart, with the accompanying hassle. So, because it tries, will the suggested fix make it work correctly, or is the literal "try" not really what he means? PLEASE NOTE: this is a desktop system, not a laptop.

    "Before enabling hibernation, please try to test whether it works correctly by running pm-hibernate in a terminal. The system will try to hibernate. If you are able to start the system again, then you are more or less safe to add an override. To do so, start editing:

        sudo nano /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla

    Fill it with this:

        [Re-enable hibernate by default]
        Identity=unix-user:*
        Action=org.freedesktop.upower.hibernate
        ResultActive=yes

    Save by pressing Ctrl-O and exit nano by pressing Ctrl-X. Restart, and hibernation is back!"


  • Git can no longer open emacs as its editor

    - by mwilliams
    I'm running Git version 1.7.3.2, which I built from source; zsh is my shell and Emacs is my editor. Recently I started seeing the following:

        /usr/local/Cellar/git/1.7.3.2/libexec/git-core/git-sh-setup: line 106: emacs: command not found
        Could not execute editor

    My zshrc looks like the following, so I can use the Cocoa build and the console binary provided with it:

        EMACS_HOME="/Applications/Emacs.app/Contents/MacOS"
        function e() { PATH=$EMACS_HOME/bin:$PATH $EMACS_HOME/Emacs -nw $@ }
        function ec() { PATH=$EMACS_HOME/bin:$PATH emacsclient -t $@ }
        function es() { e --daemon=$1 && ec -s $1 }
        function el() { ps ax|grep Emacs }
        function ek() { $EMACS_HOME/bin/emacsclient -e '(kill-emacs)' -s $1 }
        function ecompile() {
            e -eval "(setq load-path (cons (expand-file-name \".\") load-path))" \
              -batch -f batch-byte-compile $@
        }
        alias emacs=e
        alias emacsclient=ec

    I also have export EDITOR="emacs", and I have tried adding export GIT_EDITOR="emacs" (and swapping that out with "e"). But whatever I try, I can't get Git to open Emacs whenever I need to do a commit or an interactive rebase, etc.
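
    The error is consistent with Git invoking a shell that has neither your zsh functions nor an emacs binary on its PATH: aliases and shell functions are not inherited by child processes, so GIT_EDITOR="e" cannot work. A sketch of the usual workaround, a real executable wrapper (the path inside it assumes the Emacs.app layout from the zshrc above):

        mkdir -p ~/bin
        printf '%s\n' '#!/bin/sh' \
            'exec /Applications/Emacs.app/Contents/MacOS/Emacs -nw "$@"' > ~/bin/emacs-nw
        chmod +x ~/bin/emacs-nw

        git config --global core.editor ~/bin/emacs-nw

    core.editor wins over EDITOR (though not over GIT_EDITOR), and because the wrapper is a file rather than a shell function, git-sh-setup can execute it directly.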


  • Iptables REDIRECT + openvpn problem

    - by Emilio
    I want to redirect connections to port 22 to the port my OpenVPN server is bound to, 60001. OpenVPN is running on the server on 60001:

        server:~$ sudo netstat -apn | grep openvpn
        udp   0   0 67.xx.xx.137:60001   0.0.0.0:*   4301/openvpn

    I redirect port 22 to 60001 on the server:

        server:~$ sudo iptables -F -t nat
        server:~$ sudo iptables -A PREROUTING -t nat -p udp --dport 22 -j REDIRECT --to-ports 60001

    I start the OpenVPN client (openvpn.conf is correct; it works with the remote port 22 replaced by 60001):

        client:~$ ./openvpn openvpn.conf
        Tue Apr 27 00:42:50 2010 OpenVPN 2.1.1 i686-pc-linux-gnu [SSL] [EPOLL] built on Mar 23 2010
        Tue Apr 27 00:42:50 2010 UDPv4 link local (bound): [undef]:1194
        Tue Apr 27 00:42:50 2010 UDPv4 link remote: 67.xx.xx.137:22
        Tue Apr 27 00:42:52 2010 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
        Tue Apr 27 00:42:55 2010 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
        ...

    It doesn't connect. iptables shows requests from the client to the server but no answers. What's wrong with it?
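
    A couple of checks that usually localize this kind of NAT problem (a sketch, not a diagnosis): confirm the REDIRECT rule is matching at all, and watch both ports on the wire:

        # per-rule packet/byte counters; the REDIRECT line should increment
        sudo iptables -t nat -L PREROUTING -v -n

        # watch arrivals on 22 and whether anything reaches 60001
        sudo tcpdump -ni eth0 udp port 22 or udp port 60001

    If the counters increment but OpenVPN never answers, one suspect is that OpenVPN is bound specifically to 67.xx.xx.137 (see the netstat output) while REDIRECT rewrites the destination to the incoming interface's primary address; if those differ, the packets land where nothing is listening, and the client sees exactly this ECONNREFUSED. Binding OpenVPN to all addresses (local 0.0.0.0, or omitting local) or using an explicit DNAT to that IP:port takes this variable out of play.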

