Search Results

Search found 17856 results on 715 pages for 'setup py'.


  • Jumbo Frames on DIR-655

    - by Spookyone
    Hello, I am trying to set up jumbo frames on my gigabit home LAN, but no luck so far. My setup is:
        D-Link DIR-655 router, HW revision A3, firmware 1.21 EU
        Synology DS107+, firmware 3.0-1337
        Laptop with Win7 x64, external PCIx NIC managed by the "Generic Marvel Yukon 88E8053 based Ethernet Controller" driver
    The router is supposed to support jumbo frames but doesn't expose any relevant setting. I set the Jumbo Packet value to 9000 on both the NIC and the Synology box, but it doesn't work: ping -f -l 8972 says "Packet needs to be fragmented but DF set". Is there another setting I've overlooked, does the DIR-655 not actually support jumbo frames after all, or what else could be the problem?
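    A quick way to see where fragmentation kicks in is to sweep payload sizes with the same ping flags used above; this is only a sketch, and 192.168.0.2 stands in for the Synology box's address:
        rem 8972 bytes of ICMP payload + 28 bytes of headers = a 9000-byte frame
        for %%s in (1472 4000 8000 8972) do ping -f -l %%s -n 1 192.168.0.2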

    Read the article

  • Unity in 12.10 comes up behind other windows

    - by ams
    I've just upgraded from 12.04 to 12.10. For the most part everything works fine, but I have a few small problems with Unity, or maybe Compiz. When I hit the Super key, or click on the dash launcher, the dash sometimes comes up behind the other windows on the screen. As you can imagine, this makes it somewhat tricky to use. Once it has started coming up behind, no amount of trying again will convince it to come back to the front. Possibly related, the Alt-Tab switcher doesn't show either. It may be that there isn't one, or maybe it's behind the other windows too? Alt-Tab does switch the windows, but there's no visual indicator. When I hit Super-W, the windows do all do the zoom thing, but it's slow and juddery where it used to be smooth in 12.04. I'm using the standard "radeon" driver, same as before, with a triple-head monitor setup (and that works fine). I've not tried the proprietary drivers, as I've previously found their multi-monitor support much weaker than the default driver's, but maybe that's the way to go now? Video plays fine. Even WebGL seems OK. Do others see this problem? Is it a bug? Or have I just got some left-over config from 12.04 in the way?
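    For testing the "left-over config" theory, one rough approach is to move the old Compiz settings aside and reset its dconf keys before logging back in; the paths below are guesses, not confirmed culprits:
        # back up and clear possible 12.04 leftovers, then log out and back in
        mv ~/.config/compiz-1 ~/.config/compiz-1.bak
        dconf reset -f /org/compiz/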

    Read the article

  • Rsync and Windows 7

    - by Nate
    Can someone give me any tips on setting up some sort of rsync server/client on Windows 7, to run rsync between both my web hosting server and a backup server that I have running Ubuntu? I've tried setting it up with this tutorial: http://www.youtube.com/watch?v=CvwdkZLNtnA using copssh and cwRsync. I ran into all sorts of trouble, including not being able to get cwRsync to run (it installs properly, but never starts up), and copssh not generating the keys at all. The guy in the video was running Windows Server 2003, though, so I'm guessing the problems could just be because I'm running Windows 7. I've been trying to set it up with my Windows machine as the rsync server, and Ubuntu and my web hosting VPS as the clients, but I realize it may be easier (and make more sense) to just set up the rsync server on Ubuntu and an rsync client on Windows 7. Can anyone point me in the right direction? I'm thinking of using this guide: http://www.gaztronics.net/rsync.php It seems a bit outdated, though.
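    If the roles are flipped as suggested (Ubuntu as the rsync/SSH server, Windows 7 as the client), the cwRsync side reduces to a single command; the hostnames and paths here are placeholders:
        # run from a cwRsync shell on the Windows box; pushes the local site to the Ubuntu backup server
        rsync -avz --delete -e ssh /cygdrive/c/sites/ backupuser@ubuntu-backup:/srv/backups/sites/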

    Read the article

  • Nifty default controls prevent the rest of my game from rendering

    - by zergylord
    I've been trying to add a basic HUD to my 2D LWJGL game using Nifty GUI, and while I've been successful in rendering panels and static text on top of the game, using the built-in Nifty controls (e.g. an editable text field) causes the rest of my game to not render. The strange part is that I don't even have to render the GUI control; merely declaring it appears to cause this problem. I'm truly lost here, so even the vaguest glimmer of hope would be appreciated :-) Some code showing the basic layout of the problem:
    Display setup:
        // load default styles
        nifty.loadStyleFile("nifty-default-styles.xml");
        // load standard controls
        nifty.loadControlFile("nifty-default-controls.xml");
        screen = new ScreenBuilder("start") {{
            layer(new LayerBuilder("baseLayer") {{
                childLayoutHorizontal();
                // next line causes the problem
                control(new TextFieldBuilder("input","asdf") {{
                    width("200px");
                }});
            }});
        }}.build(nifty);
        nifty.gotoScreen("start");
    Rendering:
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        GLU.gluOrtho2D(0f, WINDOW_DIMENSIONS[0], WINDOW_DIMENSIONS[1], 0f);
        // I can remove the 2 nifty lines, and the game still won't render
        nifty.render(true);
        nifty.update();
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        GLU.gluOrtho2D(0f, (float)VIEWPORT_DIMENSIONS[0], 0f, (float)VIEWPORT_DIMENSIONS[1]);
        glTranslatef(translation[0], translation[1], 0);
        for (Bubble bubble : bubbles) { bubble.draw(); }
        for (Wall wall : walls) { wall.draw(); }
        for (Missile missile : missiles) { missile.draw(); }
        for (Mob mob : mobs) { mob.draw(); }
        agent.draw();

    Read the article

  • Need hard disk recommendation for Linux home server

    - by neotracker
    Hello, I'm planning to build a little Linux home server. It will mainly be used for storage and maybe as a media PC. I plan to build a software RAID5 with four 1.5 TB or 2 TB hard drives. I had already decided on the Western Digital Caviar Green 1.5 TB drive, but then I read about problems with the WD Green series: many drives failing, and that they are not recommended for RAID anyway. Of course, I couldn't find many facts on the issues, so I thought I'd just ask here ;-) What hard drives would you recommend for a software RAID5 setup? As I only need it for storage, the whole thing doesn't have to be too fast, so I prefer a cheap price and silence over great performance.
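    For reference, the software RAID5 being discussed would be assembled roughly like this (device names are placeholders, and the array should be recorded in mdadm.conf afterwards):
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        sudo mkfs.ext4 /dev/md0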

    Read the article

  • Recovering an old website

    - by noah
    I have a client with an old website that somebody set up for him long ago. The guy who set it up is unreachable, so how do we go about trying to take it over? A WHOIS lookup got us some contact information, but I don't have great hopes for that (it hasn't been updated in quite some time). The nameservers are ns1.theplanet.com and ns2.theplanet.com, and we will try calling them, but I don't expect we'll be able to get much from them. What are our options? Is there a way I can discover the registrar so we can try contacting them as well? EDIT: It would be sufficient if we could get control of the domain name or put in some sort of redirect to the new site. Either hosting was prepaid for quite some time, or someone else is still paying for it, so we don't care about that.
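    The registrar is normally listed in the WHOIS record itself, so a filtered lookup is a reasonable first step (the domain below is a placeholder):
        whois example.com | grep -iE 'registrar|expir|name server'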

    Read the article

  • ZFS, dedupe and PST files

    - by Unreason
    I am interested to know what the expected maximum dedupe ratio would be for a set of PST files. I have ~40 GB of PST files from ~15 users, with a high level of duplication of attachments. I am running tests to see if I can get significant space savings if I store the data on ZFS with dedupe. For this purpose I have installed a test setup of Nexenta, but was wondering if someone here had already done this and what level of deduplication I might expect (or in other words, how sensitive are PST files to block alignment, and what parameters can influence the ratio?). Initial tests show a very low dedupe ratio, and I did find an explanation that block-level dedupe would not be efficient here and that byte-level dedupe would be much better (and that it should be performed by an application that is aware of the internal organization), so I am just double-checking here if someone has some more input. Otherwise I will probably be converting the PST files to IMAP.
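    A minimal sketch of the kind of test setup described, with placeholder pool/dataset names; zdb -S only simulates dedupe over data already in the pool, which is handy for estimating the ratio before committing to it:
        zfs create -o dedup=on -o compression=on tank/pst
        zpool list tank      # the DEDUP column shows the ratio for data already written
        zdb -S tank          # simulated dedupe table and ratio, without enabling dedupe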

    Read the article

  • Access a PLESK website before propagation?

    - by RCNeil
    My web host uses Plesk, and I want to know if there is any way to access and view a website (with PHP and other processes functional) without propagation of the domain name. I have found countless forums on this, but they are all pretty old (circa 2001-2004) and involve either tricking your localhost or SSH commands, and some even result in terrible security risks. I would like to access a web page directory through a browser and see its contents, with the PHP processes carried out, before I propagate its potential domain name. People claim this is pointless, but during a site migration why on earth would you not test a site before propagating it? I'm looking for something similar to what cPanel offers, i.e. http://IP.ADDRESS./~mydomain.com The only solution I could think of is storing the site in a new directory of an already functional site and then setting up databases and testing the site once it's complete. Once it's tested and working, I should easily be able to migrate the files to the "new" domain name's root directory, set up new databases, and then propagate the domain name. I can't believe that Plesk 10+ still does not have a site preview method that includes PHP, JS, and Flash.
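    For what it's worth, the "tricking your localhost" approach the old threads describe is simply a hosts-file override, so the browser resolves the not-yet-propagated name to the new server while PHP runs normally; the IP and names below are placeholders (on Windows the file is C:\Windows\System32\drivers\etc\hosts):
        echo "203.0.113.10  mydomain.com www.mydomain.com" | sudo tee -a /etc/hosts
        # remove the line again once testing is done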

    Read the article

  • correct format for datetime appended to filename

    - by jhayes
    I'm trying to set up a batch file to execute a set of stored procs and dump the output to a timestamped text file. I'm having problems finding the correct format for the timestamp. Here is what I'm using:
        osql.exe -S <server> -E -Q "EXEC <stored procedure>" -o "c:\filename_%date:~-0,10%_%time:~-0,10%.txt"
    The error I get is:
        Cannot open output file - x:\filename_Thu 06/25/_16:26:43.1.txt No such file or directory
    I can't find the documentation, and I've played around with it but can't find the correct format.
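    One common workaround, assuming the "Thu 06/25/2009"-style date shown in the error, is to build a filename-safe stamp first and reuse it; the substring offsets are locale-dependent, so treat this as a sketch:
        set stamp=%date:~-4%%date:~4,2%%date:~7,2%_%time:~0,2%%time:~3,2%%time:~6,2%
        set stamp=%stamp: =0%
        osql.exe -S <server> -E -Q "EXEC <stored procedure>" -o "c:\filename_%stamp%.txt"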

    Read the article

  • How to do a 3-tier using PHP [closed]

    - by Ric
    I have a requirement from a client for my PHP web application to be 3-tier. For example, I would have a web server running Apache in the DMZ, but it should NOT contain any DB connections. It should connect to a middle server that hosts the business objects but sits behind the firewall. Those objects then connect to my SQL cluster on another server. I have actually done this using .NET, but I am not sure how to set up my stack using PHP. I suppose I could have my UI front tier call the middle tier using REST-based web services if I create my middle tier as a second web server, but this seems overly complex. The main reason for this is advanced security: we cannot have any passwords on the DMZ first-tier web server. The second reason is scalability: to have multiple servers on different tiers that can handle the requests. The last reason is deployment: it is easier if I can take one set of servers offline for testing before putting them back in production. Is there an open source project that shows how to do this? The only example I can find is the web server hosting files from a shared drive on another machine (kind of how DotNetNuke pretends to be 3-tier), but that is NOT secure.
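    A bare-bones sketch of the REST variant mentioned above: the DMZ front tier holds no credentials and just relays to a middle-tier endpoint (the host and URL are made up for illustration):
        <?php
        // front-tier code: no DB credentials, only an HTTP call to the middle tier behind the firewall
        $ch = curl_init('https://middle-tier.internal/api/orders?customer=42');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/json'));
        $orders = json_decode(curl_exec($ch), true);
        curl_close($ch);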

    Read the article

  • emails getting sent with wrong "from" address

    - by Errol Gongson
    I have a postfix/dovecot system set up on Ubuntu 10.04, and it sends/receives emails fine, but when I send emails they are all from [email protected]. For example, I have a user called "info", and when I try to send an email using mutt from this mailbox "/home/vmail/mydomain.com/info/Maildir", the email will send fine but it will be from "[email protected]" and not "[email protected]". I have 3 mailboxes (/home/vmail/mydomain.com/root/Maildir, /home/vmail/mydomain.com/root/postmaster, and /home/vmail/mydomain.com/root/info) and they all send and receive emails. I am new to postfix and dovecot... can someone who knows what they are doing help me out on this one?? From my postfix main.cf:
        myhostname = mail.mydomain.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = mydomain.com #have tried setting myorigin = mail.mydomain.com and still same problem
        mydestination = mail.mydomain.com, localhost, localhost.localdomain
        relayhost =
        mynetworks = 127.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        html_directory = /usr/share/doc/postfix/html
        message_size_limit = 30720000
        virtual_alias_domains =
    This is from the aliases file:
        postmaster: root
        root: [email protected]
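    As a quick test of whether the From header (rather than postfix rewriting) is the culprit, mutt's sender can be forced per invocation; this is only a sketch and doesn't change any postfix settings:
        echo "test body" | mutt -e 'set from=info@mydomain.com' -e 'set use_from=yes' \
            -s "from-address test" someone@example.com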

    Read the article

  • How to set up virtual users in vsftpd?

    - by ares94
    I've read this tutorial: http://howto.gumph.org/content/setup-virtual-users-and-directories-in-vsftpd/ My configuration is as follows:
        --- vsftpd.conf ---
        listen=YES
        anonymous_enable=NO
        local_enable=YES
        virtual_use_local_privs=YES
        write_enable=YES
        connect_from_port_20=YES
        pam_service_name=vsftpd
        guest_enable=YES
        user_sub_token=$USER
        local_root=/var/www/sites/$USER
        chroot_local_user=YES
        hide_ids=YES
        --- /etc/pam.d/vsftpd ---
        auth required pam_pwdfile.so pwdfile /etc/vsftpd/passwd
        account required pam_permit.so
    I created the file /etc/vsftpd/passwd and added users using htpasswd. I tried to log in but it didn't work:
        ftp 127.0.0.1
        Connected to 127.0.0.1 (127.0.0.1).
        220 vsFTPd 2.3.5+ (ext.1) ready...
        Name (127.0.0.1:root): user1
        331 Please specify the password.
        Password:
        530 Permission denied.
        Login failed.
    Everything seems fine except the permission denied thing. How can I fix this?
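    One frequent cause with this kind of setup is the hash format: pam_pwdfile generally expects crypt()-style hashes, while htpasswd defaults to Apache's own MD5 scheme. Regenerating the entries with -d (force CRYPT) is a sketch worth trying:
        htpasswd -c -d /etc/vsftpd/passwd user1
        htpasswd -d /etc/vsftpd/passwd user2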

    Read the article

  • How to ensure nvidia_current module loads during boot

    - by Aras
    I am running Ubuntu 12.10 on an Asus G75V laptop with an nvidia GeForce GTX 660M. I first ran 12.04 on this machine and was able to install the nvidia-current drivers from the swat PPA:
        sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
        sudo apt-get install nvidia-current
    This worked in 12.04, and after rebooting the machine my graphics were working properly. After the upgrade to 12.10, however, the machine boots into a low-resolution desktop which I can not really interact with. I suspect this is due to the driver not being loaded properly. To fix this, I have to switch to a ctrl+alt+F1 session, manually load the nvidia_current module, and restart the display manager:
        sudo modprobe nvidia_current
        sudo service lightdm restart
    Now everything works fine again. However, I would like not to have to do this every time I reboot the machine. I also don't want to hack a script to do this on load. Basically, if things are set up correctly, the nvidia_current driver which is installed should load. How can I make sure the nvidia_current driver module loads properly when the system starts?
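    A minimal way to request the module at boot, assuming the module name matches what modprobe loads above, is to list it in /etc/modules:
        echo nvidia_current | sudo tee -a /etc/modules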

    Read the article

  • PHP-FPM runs PHP scripts as root

    - by fwalch
    I have a web server set up using nginx and PHP-FPM listening on a Unix socket. In my php-fpm.conf, I have specified:
        user = www
        group = www
    When I run ps aux, I can see that the php-fpm worker processes run as www; the php-fpm master process runs as root. However, I noticed that PHP scripts are executed as root; at least that's the output of echo get_current_user(); What can I do to run scripts as the www user? How can this even happen if the worker processes run as www?
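    Worth noting: get_current_user() returns the owner of the script file, not the user the worker runs as, so a root-owned script reports "root" even under a www worker. A small sketch to compare the two values (requires the posix extension):
        <?php
        $pw = posix_getpwuid(posix_geteuid());
        echo get_current_user(), "\n";   // owner of this .php file
        echo $pw['name'], "\n";          // user the FPM worker actually runs as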

    Read the article

  • Tracking the linux config with git: how?

    - by Pierre
    I'd like to track my Linux configurations with git. My idea is to have a branch for each server. /etc is not the only directory to be tracked (so I won't just git init in /etc). As far as I could see, it is possible to init a git repository for a separate directory. I tried this:
        # mkdir -p /git/.git
        # cd /git
        # git --work-tree=/ --git-dir=/git/.git init
        Initialized empty Git repository in /git/.git/
    1) Creating a new branch before anything is committed is not possible:
        # git branch server1
        fatal: Not a valid object name: 'HEAD'.
    2) Adding a file in master/HEAD is not possible:
        # touch README.md
        # git add README.md
        fatal: Unable to create '//.git/index.lock': No such file or directory
    How should I properly set up git to track my system config? Thanks. P.
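    Since the work tree and git dir are detached from each other, every git call needs both paths spelled out; a shell alias keeps that manageable. A sketch, with the first commit made before branching so that HEAD exists:
        alias cfg='git --git-dir=/git/.git --work-tree=/'
        cfg add /etc/fstab
        cfg commit -m "initial import"
        cfg branch server1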

    Read the article

  • How to Access an AWS Instance with RDC when behind a Private Subnet of a VPC

    - by dalej
    We are implementing a typical Amazon VPC with public and private subnets, with all servers running the Windows platform. The MS SQL instances will be on the private subnet, with all IIS/web servers on the public subnet. We have followed the detailed instructions at Scenario 2: VPC with Public and Private Subnets and everything works properly, until the point where you want to set up a Remote Desktop Connection into the SQL server(s) on the private subnet. At this point the instructions assume you are accessing a server on the public subnet, and it is not clear what is required to RDC to a server on a private subnet. It would make sense that some sort of port redirection is necessary: perhaps accessing the EIP of the NAT instance to hit a particular SQL server? Or perhaps using an Elastic Load Balancer (even though this is really for HTTP protocols)? What additional setup is required for such a Remote Desktop Connection?
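    One possibility, sketched here with placeholder addresses, is to forward an RDP port on the (Linux) NAT instance to the SQL server on the private subnet; the security groups still need to allow 3389 between the instances involved:
        iptables -t nat -A PREROUTING -p tcp --dport 3389 -j DNAT --to-destination 10.0.1.50:3389
        iptables -A FORWARD -p tcp -d 10.0.1.50 --dport 3389 -j ACCEPT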

    Read the article

  • Windows Proxy Server advice

    - by Scott
    I have a web server that currently has about 10 IP addresses. I have various clients that require a proxy server to route their internal traffic through. The load is not that great, so I'd like to have this ONE server act as a proxy server for 10 different clients, each client having their own unique IP on the server. The hardware is already set up, but I'm wondering what software solutions you recommend? I've looked at WinGate, Squid, etc., but am pretty green with this. Maybe there's even a way to have Windows do this natively? I'm running Windows Server 2008, 32-bit.

    Read the article

  • Control scheduled tasks execution

    - by SJuan76
    We are a small shop. I am mainly a programmer, but since I'm the only one willing to risk managing our servers, the task has fallen on me (yet it is still a secondary function, so I cannot give it too much time). Over the course of years we have needed to create a decent number of .bat scripts that run as scheduled tasks on our servers (dump DB servers, SVN servers, copy files, etc.). Manually checking that every one of them has completed OK is a time-consuming task. I could get them to send an email on completion, but then I would get swarmed by lots of emails each morning. If I set them up to only e-mail on failure, I might miss the instances where the error causes the task to abort (or even not to start). Are there other alternatives? We are currently using Windows 2003 R2, but we are thinking of adding a Linux server soon, so a cross-platform solution would be best.

    Read the article

  • Nginx proxy to Apache - resolve HTTP ORIGIN

    - by Fratyr
    I have a server setup with nginx serving static content and proxying all PHP/dynamic requests to Apache on 127.0.0.1. I'm building an API for my databases, and I need to allow clients by their origin (domain name) rather than just by IP, based on CORS rules. So when I send an HTTP header
        header("Access-Control-Allow-Origin: www.client-requesting.myapi.com");
    from my API server, I have to tell it which origin I allow; otherwise client-side requests won't work against my API due to the same-origin policy. The question is: how can I know which domain name (if any) called my API? What should the nginx and Apache configuration be to pass the origin parameter? I tried to google, and all I found is a possible solution with mod_rpaf, but I wanted to be sure. Thanks!
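    Since nginx and Apache pass the browser's Origin request header through to the backend by default, the API itself can read it and echo back only whitelisted values; a rough sketch, with the allowed list made up for illustration:
        <?php
        $allowed = array('http://www.client-requesting.myapi.com');
        $origin  = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';
        if (in_array($origin, $allowed, true)) {
            header('Access-Control-Allow-Origin: ' . $origin);
        }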

    Read the article

  • sudoers security

    - by jetboy
    I've set up a script to do Subversion updates across two servers (localhost and a remote server), called by a post-commit hook run by the www-data user.
    /srv/svn/mysite/hooks/post-commit contains:
        sudo -u cli /usr/local/bin/svn_deploy
    /usr/local/bin/svn_deploy is owned by the cli user, and contains:
        #!/bin/sh
        svn update /srv/www/mysite
        ssh cli@remotehost 'svn update /srv/www/mysite'
    To get this to work I've had to add the following to the sudoers file:
        www-data ALL = (cli) NOPASSWD: /usr/local/bin/svn_deploy
        cli ALL = NOEXEC:NOPASSWD: /usr/local/bin/svn_deploy
    Entries for both www-data and cli were necessary to avoid the error:
        post commit hook failed: no tty present and no askpass program specified
    I'm wary of giving any kind of elevated rights to www-data. Is there anything else I should be doing to reduce or eliminate any security risk?

    Read the article

  • Wordpress 3 multi-site install

    - by mike
    Hello, Trying to figure out if this is possible... My company has a CMS product that was written in Java, and we decided to use WordPress to run blogs for our clients. Obviously, WordPress does not run on Tomcat (at least not by default), so we installed Pound (http://www.apsis.ch/pound/) on our server and have set up Apache and Tomcat on different ports. When "/blog/" is requested, the request is directed to Apache. This works fine, but we would like to use WordPress multisite so that we can manage all the blogs from a single interface. We would also like the URL for every site to be "/blog/", for example:
        http://www.site1.com/blog/
        http://www.site2.com/blog/
    I'm thinking it would have to be done with Apache??? Is it even possible? Thanks!

    Read the article

  • Accessing shared resource on local computer from users of different physical location

    - by Joe
    Sounds like an easy task to some, but such a difficult task for me to do... The main requirement is to set something up in offices located in different places, so (1st question) users are able to log on to the domain without VPN when they are in one of the offices. Additionally, (2nd question) how can they log on to the domain server when they are on the road, like in a Starbucks, and what do they have to do to connect to the domain after the VPN connection is successful? Also, it's my understanding that we can't share resources between computers on different network segments, so (3rd question) what is the best solution to bridge/combine two network segments (two offices in different locations) so computers in different locations can see each other? Thank you in advance for any response.

    Read the article

  • Successful su for user by root in /var/log/auth.log

    - by grs
    I have these sorts of entries in my /var/log/auth.log:
        Apr 3 12:32:23 machine_name su[1521]: Successful su for user1 by root
        Apr 3 12:32:23 machine_name su[1654]: Successful su for user2 by root
        Apr 3 12:32:24 machine_name su[1772]: Successful su for user3 by root
    The situation: all users are real accounts in /etc/passwd; none of the users has their own crontab; all of those users logged in to the machine some time before the entries appear, via SSH or NoMachine (anywhere from a few minutes to a few hours earlier); no cron jobs are scheduled to run at that time, and anacron is removed. I can see similar entries for other days and other times. The common part is that the users are logged in when the entries appear. They do not appear during login, but some time afterwards. This machine has a similar setup to a few others, but it is the only one where I see these entries. What causes them? Thanks
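    One way to find out what is invoking su, assuming auditd is available, is to watch executions of the binary and inspect the parent process in the resulting records; the key name below is arbitrary:
        auditctl -w /bin/su -p x -k su-watch
        ausearch -k su-watch -i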

    Read the article

  • Private Git repo using Smart HTTP with LDAP authentification

    - by ALOToverflow
    I've been crawling the interwebz and getting my hands dirty for the last few days, but I can't seem to make it all work together. I managed to get an HTTP repo working on Ubuntu 10.04 over Smart HTTP (pull and push over HTTP) for a single repo. This means that I do the initial setup over SSH to the server (git init --bare), and after that the clients can pull and push to it (git clone http://servername/allgitrepos/repo.git). Unfortunately, it's impossible to add a new repo without SSHing to the server and adding it manually, i.e. git push http://servername/allgitrepos/repo2.git (allgitrepos is available for everyone to read, write and execute) would fail, complaining about git update-server-info (which seems to be a general error message). So far the repository is anonymous, so I would like to authenticate using LDAP and also use the LDAP creds to make the git commit. So, how can I push new repos to the server, and how can I use the LDAP creds to make the git commit? Thanks
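    If the repos are served by Apache's git-http-backend, a rough sketch of wrapping them in LDAP authentication looks like the following; the paths and LDAP URL are assumptions, and a new repo2.git still has to be created on the server (git init --bare) before the first push:
        SetEnv GIT_PROJECT_ROOT /srv/allgitrepos
        SetEnv GIT_HTTP_EXPORT_ALL
        ScriptAlias /allgitrepos/ /usr/lib/git-core/git-http-backend/
        <Location /allgitrepos/>
            AuthType Basic
            AuthName "Git repositories"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
            Require valid-user
        </Location>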

    Read the article

  • Double try_files to solve the nginx's "No input file specified" issue

    - by Howard
    I am following the nginx wiki (http://wiki.nginx.org/WordPress) to set up my WordPress:
        location / {
            try_files $uri $uri/ /index.php?$args;
        }
    With the above lines, when a static file is not found the request is redirected to WordPress's index.php, which is okay, but...
    Problem: when I request a non-existent PHP script, e.g. http://www.example.com/foo.php, nginx gives me
        No input file specified
    I want nginx to return 404 instead of the above message, so in the main fcgi config I add a second try_files:
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            include /etc/nginx/fastcgi_params;
            ...
        }
    And this worked, but I am wondering if there is a better way to handle it?

    Read the article
