Search Results

Search found 15099 results on 604 pages for 'runtime environment'.


  • How long will a "safely stored" Solid-State-Drive (SSD) keep its data? (e.g. bank safety-deposit box)

    - by user31575
    Here's my use case: copy photos and videos once (and only once) onto an internal SATA solid-state drive (SSD), then put that drive in a well-ventilated, air-conditioned bank safety-deposit box for safekeeping. The question: how long can I safely store an SSD in such an environment, i.e. with 0% bit rot and a 100% chance of success when it is plugged back in? Are some SSDs more reliable than others for this use case (e.g. smaller vs. larger capacity, SLC vs. MLC, different brands, etc.)? More fodder: I have read that solid-state memory cards (e.g. CompactFlash or SD cards) have much better durability than other media (DVDs, CDs, hard drives) for this use case (guaranteed against bit rot and other dysfunction on the order of a decade versus about a year), but I don't know whether this applies to SSD "hard drives". Copying to one 500 GB SSD is easier than copying to eight 64 GB flash drives. SATA SSDs have no moving parts, but they have more "visible electronics" than a CompactFlash card, and I don't know whether those exposed electronics are a point of failure in contrast to a card. I know many will point to Carbonite and other cloud-backup services, but I like the simplicity of having physical copies and wanted to understand the risks and implications. Thanks.

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":

      1. BUILTIN\Administrators
      2. MYDOMAIN\Schema Admins
      3. MYDOMAIN\Offer Remote Assistance Helpers
      4. MYDOMAIN\MySecurityGroup

    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?

    Read the article

  • ping incorrectly pinging 127.0.0.1

    - by AlexW
    I've got an odd DNS issue. I'm running a dual IPv4/IPv6 environment on Linux, and pinging some sites results in ping pinging 127.0.0.1, e.g.:

      #> ping authserver.mojang.com
      PING authserver.mojang.com (127.0.0.1) 56(84) bytes of data.
      64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.045 ms
      64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.043 ms
      64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=3 ttl=64 time=0.058 ms

      --- authserver.mojang.com ping statistics ---
      3 packets transmitted, 3 received, 0% packet loss, time 2000ms
      rtt min/avg/max/mdev = 0.043/0.048/0.058/0.010 ms

    dig, however, correctly returns the following:

      # dig authserver.mojang.com
      ; <<>> DiG 9.9.3-P2 <<>> authserver.mojang.com
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15800
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

      ;; OPT PSEUDOSECTION:
      ; EDNS: version: 0, flags:; udp: 512
      ;; QUESTION SECTION:
      ;authserver.mojang.com.     IN    A

      ;; ANSWER SECTION:
      authserver.mojang.com.  5  IN    A    54.235.119.47

      ;; Query time: 14 msec
      ;; SERVER: 2001:4860:4860::8888#53(2001:4860:4860::8888)
      ;; WHEN: Sat Nov 09 15:34:40 GMT 2013
      ;; MSG SIZE rcvd: 66

    I'm confused! My web browser returns the correct website, and the same computer booted into Windows also works correctly.
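    A hedged aside on why the two tools can disagree: ping resolves the name through the system resolver (the hosts: line in nsswitch.conf, /etc/hosts, then DNS), while dig queries the DNS server directly, so a stray local override never shows up in dig. A few generic checks, nothing specific to this machine:

      # What the system resolver (the one ping uses) returns for the name
      getent hosts authserver.mojang.com
      # Any local override mapping it to 127.0.0.1?
      grep -i mojang /etc/hosts
      # Order in which name sources are consulted
      grep '^hosts:' /etc/nsswitch.conf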

    Read the article

  • Hosting multiple websites from home

    - by dean nolan
    I have just been accepted into Microsoft's Website Spark program, which I mainly joined for the tools (Visual Studio, Blend). I have a few of my own websites, some personal and a couple for business, and I also work freelance and sometimes need a place to put up a demo of a client's project. The websites I currently have are spread across different hosting providers and domain registrars. Website Spark comes with Windows Server 2008 and SQL Server 2008, so it would be really advantageous for me to have everything in one place, where I have complete control over the database and the environment. Over the next 4-6 months I am therefore thinking of migrating all of this to my own server, hosted from home, or perhaps set up at home and later moved into a proper data centre. I was wondering what steps I should take and what to be aware of, specifically:

      1) Having all these different websites on one computer and having each URL go to the proper place.
      2) Cost effectiveness: having the server at home as opposed to in a data centre. Most solutions I see charge over £1000 a month to host a machine in a data centre. This is mostly for my own ease of management, since the shared hosting I currently have is very limited configuration-wise. Would getting an in-house server be beneficial as a stepping stone to the cloud? What measures should I take with my ISP?

    I know this is a lot to ask, but even links to good articles would be appreciated. Thanks.

    Read the article

  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote FTP server to my machine at regular intervals. Once the file is downloaded, I want to call another script that will process it. My development machine is Mac OS X; the eventual deployment environment is Linux. What would be the stock-standard way to automate this? I know I can use cron to schedule curl to download the file and to run a processing script at regular intervals, and I know I could write a slightly more complex script or application that adds error handling, rescheduling and status emails. But one of my requirements for this project is to write as little custom code as possible; instead I should use standard, tried-and-true existing tools, and if I do have to write code, to write the most straightforward code possible. The reason is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people long after I am gone from the project, so the intention is to use well-documented, well-supported tools as much as possible. This seems like such a common task that there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and status messages. Is that what Expect is for? What would you recommend? (The system will be downloading weather-prediction data every six hours, so that the system can prepare in the event of bad-weather warnings.)
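    For comparison, a bare-bones sketch of the "standard tools only" version using cron, curl and mail; the URL, paths, schedule and alert address are placeholders, and mail (or mailx) has to exist on the target machines:

      # crontab entry: run 15 minutes past the hour, every six hours
      #   15 */6 * * * /usr/local/bin/fetch_and_process.sh

      #!/bin/sh
      # fetch_and_process.sh: download the data file with retries, then hand it to the processor.
      URL="ftp://ftp.example.org/weather/forecast.dat"      # placeholder
      DEST="/var/data/forecast.dat"
      if curl --silent --show-error --fail --retry 3 --retry-delay 300 -o "$DEST" "$URL"; then
          /usr/local/bin/process_forecast.sh "$DEST"        # placeholder processing script
      else
          echo "Download of $URL failed at $(date)" | mail -s "forecast download failed" ops@example.org
          exit 1
      fi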

    Read the article

  • Same native and tagged VLAN possible on Red Hat?

    - by Chris Phillips
    Hi guys and gals, I'm looking at implementing a system using a number of tagged VLANs and a native VLAN delivered to a server over an active/passive bonded interface. The untagged VLAN is for physical machine access; the tagged VLANs are connected to bridges and then to QEMU VMs inside the machine. Hopefully that plan is fine, but I'm trying to implement a crippled version of it in a dev environment where, due to a lack of underlying network config in this location, the same single VLAN is delivered to the machine both tagged AND plain. I'm not clear whether this will work (or whether I should simply be confident that it will work once different VLANs are used), because I'm seeing odd things: a VM ARPs out over the tagged VLAN to the core switch, but the ARP reply comes back on the untagged interface. Now, an ARP reply is unicast, right? So it is a deliberate thing to send the ARP response on the untagged interface, not a case of a broadcast response not being passed on the tagged side; i.e. there is some underlying logic pushing it that way. Something about the MACs, perhaps? This is a CentOS 5.5 machine, with VLANs from vconfig. (I've seen references to the Linux macvlan work, but that isn't available here by default.) So: 1) Should having the SAME VLAN tagged and untagged work? 2) Will different tagged VLANs alongside the untagged interface work nicely and easily?
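    For reference, a minimal sketch of how the intended (different-VLAN) layout is usually expressed with CentOS 5 initscripts; the interface names, VLAN ID and addresses are assumptions, not values from the question:

      # /etc/sysconfig/network-scripts/ifcfg-bond0         (untagged / native VLAN, host access)
      DEVICE=bond0
      BONDING_OPTS="mode=active-backup miimon=100"
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=192.168.1.10
      NETMASK=255.255.255.0

      # /etc/sysconfig/network-scripts/ifcfg-bond0.100     (tagged VLAN 100, handed to a bridge for the VMs)
      DEVICE=bond0.100
      VLAN=yes
      ONBOOT=yes
      BRIDGE=br100

      # /etc/sysconfig/network-scripts/ifcfg-br100
      DEVICE=br100
      TYPE=Bridge
      ONBOOT=yes
      BOOTPROTO=none

    On most switches a VLAN configured as native on a port always egresses untagged there, so seeing the ARP reply arrive on the untagged interface is consistent with the switch treating that VLAN as native and always sending it untagged; with distinct VLAN IDs, as above, the ambiguity goes away.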

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it, with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS; we use XFS with LVM over that to create 100 TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers, and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small compute cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). What we seek, then, is a relatively low-cost storage solution with acceptable performance that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or Btrfs per server, with GlusterFS running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3PAR storage solutions. Any suggestions or experiences are appreciated.

    Read the article

  • How to make Quicksilver remember a custom trigger

    - by corroded
    I am trying to create a custom trigger for my shell/AppleScript file so I can launch my dev environment at the push of a button. Basically:

      1. I have a shell script (with some AppleScript included) in ~ named start_server.sh which does three things: start the Solr server, start memcached, and start script/server.
      2. I have a saved Quicksilver command (.qs) that opens start_server.sh (so start_server.sh, with the action "Run in Terminal").
      3. I created a custom trigger that calls this saved QS command.

    I did that, then tested it, and it works. I then tried to double-check it, so I quit Quicksilver, and when I looked at the triggers again it just said "Open (null)" as the action. I set the trigger again, and when I restarted QS the same thing happened. I don't know why; my old custom trigger to open Terminal has worked since forever, so why doesn't this one work? Here's a screenie of the triggers after I restart QS: http://grab.by/4XWW If you have any other suggestion on how to make a "push button" start for my server then please do so :) Thanks! As an added note, I have already tried the steps in this thread, but to no avail: http://groups.google.com/group/blacktree-quicksilver/browse_thread/thread/7b65ecf6625f8989
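    For reference, a minimal sketch of the kind of start_server.sh described in step 1; the project paths and the Solr/memcached invocations are assumptions about a typical Rails-plus-Solr setup, not details from the question:

      #!/bin/sh
      # Start the pieces of the dev environment in the background, logging to ~/dev-logs.
      mkdir -p ~/dev-logs
      (cd ~/projects/myapp/solr && java -jar start.jar > ~/dev-logs/solr.log 2>&1 &)   # Solr
      memcached -d -m 64 -p 11211                                                      # memcached
      (cd ~/projects/myapp && script/server > ~/dev-logs/rails.log 2>&1 &)             # Rails dev server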

    Read the article

  • FreeBSD: problem with Postfix after updating LDAP

    - by Olexandr
    On the server I installed openldap-server; an OpenLDAP client was already installed on this machine. The openldap-client version (2.4.16) was older than the new openldap-server (2.4.21), so the client was updated as well. The OpenLDAP client works with Postfix on this server, and after all the updates Postfix can't start any more. The error from postfix stop|start is:

      /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"

    The library directory contains libldap-2.4.so.7, but libldap-2.4.so.6 has been removed from the server. When I try to deinstall the currently installed version of openldap-client, the system writes "===> Deinstalling for net/openldap24-client" O.K., but when I run "make install" the system writes:

      ===> Installing for openldap-sasl-client-2.4.23
      ===> openldap-sasl-client-2.4.23 depends on shared library: sasl2.2 - found
      ===> Generating temporary packing list
      ===> Checking if net/openldap24-client already installed
      ===> An older version of net/openldap24-client is already installed (openldap-client-2.4.21)
      You may wish to ``make deinstall'' and install this port again by ``make reinstall''
      to upgrade it properly. If you really wish to overwrite the old port of
      net/openldap24-client without deleting it first, set the variable "FORCE_PKG_REGISTER"
      in your environment or the "make install" command line.
      *** Error code 1

      Stop in /usr/ports/net/openldap24-client.
      *** Error code 1

      Stop in /usr/ports/net/openldap24-client.

    Updating the ports doesn't help, and Postfix keeps reporting the same error:

      /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"
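    Not a definitive fix, but a sketch of the usual way out when a FreeBSD port's package registration is stale and a dependent binary is linked against a library that has gone away; it assumes the standard ports tree location and that rebuilding Postfix against the new libldap is acceptable:

      # Clear the stale registration and reinstall the client port
      cd /usr/ports/net/openldap24-client
      make deinstall clean
      make FORCE_PKG_REGISTER=yes install clean
      # (pkg_delete -f openldap-client-2.4.21 is another way to drop the stale entry)

      # Rebuild Postfix so it links against the new libldap-2.4.so.7
      cd /usr/ports/mail/postfix
      make deinstall reinstall clean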

    Read the article

  • Windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app with a "security feature" that limits connections based on IP address, and I'm trying to find a way to migrate this app to a terminal-server environment. The simple solution would be for the development team to update the code in the application, but in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some kind of scripting to force each user account to use a specific NIC. Does anybody have any ideas on this? EDIT 1: I think I have a hack to work around my specific problem, but I'd love to hear a more elegant solution. I got lucky in that the software reads the server IP address out of a config file, so I'm going to make a config file and a custom program-files folder for each user, then add a VMware NIC for each user and put each server IP address on a different subnet. That will force the traffic for a particular user to a particular IP address, but it's really messy, and all the VM NICs will slow down the terminal server. I'll set up a proof of concept on Monday and let the group know how it affects performance.

    Read the article

  • Ubuntu 12.04 very slow, especially with Android Studio

    - by Nada
    I have an old laptop with the following specification: 485 MiB of memory, a genuine Intel T2300 CPU @ 1.66 GHz ×2, a 32-bit OS and a 78.1 GB disk. I installed Ubuntu 12.04 LTS on it and noticed that the overall system is very slow to respond. I searched the internet and found some articles about how to make Ubuntu 12.04 LTS run fast; I applied everything they suggested, including installing the LXDE desktop environment, but the system response time did not change. I then needed to develop some Android applications, so I downloaded Android Studio (Beta) 0.8.6, and the problem became worse than before: whenever I try to open Android Studio the screen freezes for several minutes, it takes a long time to load the project and initialise the workspace, and when I try to move the cursor it moves very slowly. When I tried to run my first application on the AVD it took three hours and still had not run. I have deleted Android Studio and reinstalled it several times trying to solve the problem, but nothing changes. If you have any suggestions that may help me make my laptop and Android Studio work faster, I would appreciate them. Thank you in advance.

    Read the article

  • What games work well on MacBook Pro (i7/GeForce GT 330M) within VMware Fusion?

    - by webworm
    I have a 15" MacBook Pro (2.66 GHz i7 with 8 GB RAM) with the GeForce GT 330M 512 MB graphics card. I use it primarily for development (Mac/web/Windows), though I would like to play the occasional game with my son, who uses a desktop PC at home. I prefer to use VMware Fusion for virtualisation rather than Boot Camp for a number of reasons:

      - heat/fan issues with the i7 under Boot Camp;
      - I prefer to keep the virtual machine as a single file rather than a dedicated partition (easier to move and back up);
      - I have heard that Windows support for the GeForce GT 330M under Boot Camp is not all that good.

    That being said, I was wondering what sort of games I would be able to play within Fusion running Windows 7. I have 8 GB of RAM and usually dedicate 4 GB to the virtual machine. I don't expect to be able to play the latest FPS games such as Battlefield: Bad Company 2 or Call of Duty; rather, I am looking at games such as Total War II, Civilization IV, Supreme Commander, and other RTS-type games. I should mention that the native screen resolution of my MacBook Pro is 1680x1050, which is what I would most likely run the VM at (full screen). Thank you for any advice.

    Read the article

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs and cron, for a backup routine. I have SSH keys set up between the boxes to allow passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

      #!/bin/sh
      echo "Mounting Share"
      /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and the share mounts successfully without my being asked for a password. Yet when it is run by cron, the script fails. The path to sshfs is identical to the value of `which sshfs`. Here is the email root receives from the cron daemon:

      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=root>
      X-Cron-Env: <USER=root>

      Mounting Share
      fuse: failed to exec mount program: No such file or directory
      fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving "No such file or directory" in this instance; it seems especially odd given that the paths appear to be correct. I've also compared the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At boot, FUSE reports its version as:

      fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me ServerFault wizards, you're my only hope!
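    One detail stands out in the cron email above: cron runs the script with PATH=/usr/bin:/bin, while sshfs has to exec a FUSE mount helper (mount_fusefs on FreeBSD, typically under /sbin or /usr/local/sbin depending on how it was installed), which could explain the "No such file or directory". A hedged sketch of one way to test that theory:

      #!/bin/sh
      # Widen cron's minimal PATH before calling sshfs, so its mount helper can be found.
      PATH=/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin
      export PATH
      echo "Mounting Share"
      /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    Alternatively, a PATH=... line at the top of root's crontab has the same effect.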

    Read the article

  • How to connect the virtual networks of VMware guests running on different hosts?

    - by gyrolf
    In a test setup, we are running several virtual machines on a single VMware Workstation host. All virtual machines are connected via a "host only" network. This runs fine for up to 2 or 3 virtual machines (depending on the host hardware). To allow more virtual machines, we want to use more host machines. Details about the environment and applications:

      - Host PCs run Windows XP in a corporate intranet.
      - The VMware product used is Workstation 6.5.
      - Guests run Windows Server 2003.
      - All guests act as web servers.
      - One of the guests additionally acts as a Windows file server, offering shared folders for the other guests to connect to.

    Restrictions:

      - VMware guests shall not be visible from the intranet.
      - Changes to the host PCs are restricted by corporate policy.
      - There is no domain controller in the virtual network; all virtual machines are members of the same workgroup.
      - Running the virtual network as NAT is possible.
      - Port forwarding may be used if it does not conflict with ports used by the host PC.

    Looking for a solution, I found hints about using router or VPN software on the hosts, but without any details on how to set it up. (I found a similar question, "Sharing the network between 2 VMware hosts", but the answer was not sufficient for me.)

    Read the article

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04. /etc/suphp/suphp.conf:

      [global]
      ;Path to logfile
      logfile=/var/log/suphp/suphp.log
      ;Loglevel
      loglevel=info
      ;User Apache is running as
      webserver_user=www-data
      ;Path all scripts have to be in
      docroot=/home
      ;Path to chroot() to before executing script
      ;chroot=/mychroot
      ; Security options
      allow_file_group_writeable=false
      allow_file_others_writeable=false
      allow_directory_group_writeable=false
      allow_directory_others_writeable=false
      ;Check wheter script is within DOCUMENT_ROOT
      check_vhost_docroot=true
      ;Send minor error messages to browser
      errors_to_browser=false
      ;PATH environment variable
      env_path=/bin:/usr/bin
      ;Umask to set, specify in octal notation
      umask=0077
      ; Minimum UID
      min_uid=100
      ; Minimum GID
      min_gid=100

      [handlers]
      ;Handler for php-scripts
      application/x-httpd-suphp="php:/usr/bin/php-cgi"
      ;Handler for CGI-scripts
      x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

      NameVirtualHost *:8080
      <VirtualHost *:8080>
          ServerAdmin ...
          ServerName ...
          ServerAlias ...
          AddType application/x-httpd-php .php
          AddHandler application/x-httpd-php .php
          suPHP_Engine on
          suPHP_UserGroup user user
          suPHP_ConfigPath "/home/user/etc"
          suPHP_PHPPath /usr/bin
          DocumentRoot /home/user/web/site.com/
          ErrorLog /var/log/apache2/site.com-error_log
          CustomLog /var/log/apache2/site.com-access_log common
          <Directory /home/user/web/site.com/>
              Order Deny,Allow
              Allow from all
              Options +Indexes
          </Directory>
      </VirtualHost>

    But when I created /home/user/web/id.php containing <?php system('id'); ?>, the result I get is:

      uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thank you.
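    A hedged observation on the config quoted above: the vhost registers .php with the application/x-httpd-php handler, while suphp.conf only defines a handler for application/x-httpd-suphp, and a script reporting www-data usually means mod_php is still the module executing it. A sketch of how one might check and switch on Ubuntu 10.04; the module names assume a stock apache2/libapache2-mod-suphp install:

      # Which PHP-related modules are actually loaded? mod_php and mod_suphp should not both serve .php
      apache2ctl -M | grep -Ei 'php|suphp'
      # If mod_php (php5) is active, disable it, keep suPHP, and restart Apache:
      sudo a2dismod php5
      sudo a2enmod suphp
      sudo /etc/init.d/apache2 restart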

    Read the article

  • Easiest way to do host name resolution with IPA?

    - by Luke
    We are currently using static LAN IP addresses for our internal, non-public-facing servers; we don't have DHCP configured. We're using Vyatta for our router and firewall, and the firewall is configured to be zone-based. We want to set up IPA for centralized authentication (LDAP + Kerberos), and IPA requires resolvable host names. I want to avoid having to enter DNS records by hand. What is the most painless way to make host names resolvable that works with IPA in a Linux-only environment? We aren't using anything to resolve host names now; up until now we've been using static IP addresses and local users on each server. We've looked at BIND, DHCP (does that even solve the problem?), and multicast DNS. At this point we're not sure which solution would work best. Is there another option we haven't considered? Security is very important: we have multiple zones, where each zone has very specific or no access to another zone. DNS for public domains is forwarded from Vyatta to our ISP's DNS server.

    Read the article

  • Single Sign On 802.1x Wireless - saying “Connecting to <SSID>”, hangs for 10 seconds, fails with “Unable to connect to <SSID>, Logging on…”.

    - by Phaedrus
    We are implementing WiFi on Windows 7 machines in our corporate environment. Machines should be able to log into the domain over WiFi both as the machine (pre-logon) and as the user (post-logon). We have everything working correctly except for two things: 1) sometimes the login scripts don't run, and 2) the user VLAN is sometimes different from the machine VLAN, and no DHCP renew occurs after user logon. I am clear that both problems should be fixable by using the "Single Sign On" option under the 802.1X Wireless Vista GPO, setting the wireless to connect immediately before user logon, and also enabling "This network uses different VLAN for authentication with machine and user credentials". If I enable these GPO settings in a lab, the computer does authenticate and get WiFi before the user logs on, so when the login box is displayed it says "Windows will try to connect to <SSID>", even though it is already connected (which should be OK?). Enter the user credentials and it goes to a screen saying "Connecting to <SSID>", hangs for 10 seconds, and fails with "Unable to connect to <SSID>, Logging on…". The desktop fires up and then the user re-authenticates with no problem as himself instead of the machine, but by that point we have defeated the point of WiFi SSO "before user logon". Also by that point no DHCP renew seems to occur, and the user is still stuck with the wrong IP address for the new VLAN. When the "Connecting to <SSID>" screen comes up, there's no indication on the AP or the RADIUS server that anything whatsoever is happening after credentials are entered, right up until the domain logon. Also, with this policy enabled, Windows sometimes hangs on a black screen indefinitely until I disable the wireless NIC, so something is knackered for sure. What have I missed? Suggestions are much appreciated... /P

    Read the article

  • Why can't a PC with 2 network cards be accessed by hostname?

    - by lewis
    I set up a PC with two network cards, both connected to the same LAN. I can connect to this PC (e.g. by Remote Desktop) only via its IP addresses; accessing it by hostname does not work. Why is this the case? UPDATE - the full environment:

      1. A PC with two hardware network adapters.
      2. VMware Workstation installed on this PC, with three VMs networked using the "bridged" network setting.
      3. All IP addresses on the LAN are assigned by DHCP.
      4. Windows Server 2008 on all hosts (both physical and virtual).

    As a result: the PC has two IP addresses (e.g. 192.168.1.71 and 192.168.1.72) and is reachable on the LAN by IP address, but not by hostname. The VMs each have their own IP address (e.g. 192.168.1.73, *74, *75, etc.); they are reachable from the LAN by IP, BUT not by their hostnames. How can I access the PC and the VMs by hostname?

    Read the article

  • Is Ubuntu a bad distro for a standalone MySQL database server?

    - by DhruvPathak
    I read an article here: http://www.mysqlperformanceblog.com/2011/12/08/which-linux-distribution-for-mysql-server/ It says, of Debian and Ubuntu:

      "On the other end there are Debian and Ubuntu. Both use tool called dpkg for package management. There isn't a month that I log in to a system based on either distribution where there are no issues with packages consistency. Unfinished installations, unresolved conflicts are so common that it's just beyond simple negligence. The packaging system is just not robust enough. Another problem is that one broken package may block you from installing or uninstalling anything else. Imagine that someone left system in such shape, you prepared for downtime, stopped MySQL and… error – text editor has not been properly installed, so you cannot upgrade MySQL either until the problem is fixed. In a stressful situation when downtime clock ticks – annoying at best."

    We prefer Ubuntu Server because of familiarity, and Ubuntu is also our development environment. Questions: Is Ubuntu commonly used in production for a MySQL database server? Is it ever worth the trouble to run one distro (e.g. Ubuntu) on the web server and another (say, Red Hat) on the database server? Or is a homogeneous server pool a better choice?

    Read the article

  • Finding the current user authenticated by basic auth (Apache)

    - by jtd
    When you log in through a basic auth page, is the username you authenticated as stored anywhere (on the server or the client machine), perhaps in an environment variable? Background: I have a common web administration page for an e-mail server and I'd like to know who is doing what. When a user successfully logs in via basic auth, I want to be able to identify them and log their actions, so that each time a request is submitted I can write to a log file. The basic format would be:

      $username ran a $function against $useraccount

    so if a user changed someone's permissions, e.g.:

      Admin-Bob ran a permission change against User-Scott

    Then if errors occur, I can easily trace back in the log file which actions led to the cause. I tried checking the %ENV hash, to no avail. Any ideas? I don't really want to get into PHP-like sessions, because that would mean scrapping my basic auth, which already gives me a fine degree of control. If I had to code something with sessions, I'd need to implement a system to block users after a maximum number of tries, and so on, which I don't really want to write. I think this is better geared towards Server Fault because it pertains to Apache more so than to the programming language; sessions can be done in a myriad of languages.
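    A hedged pointer rather than a full answer: after a successful basic-auth exchange, Apache exposes the authenticated name to the request being handled as the REMOTE_USER variable (in a CGI environment, so $ENV{REMOTE_USER} from Perl); it is not present in a login shell's environment, which would explain why a casual look at %ENV shows nothing. A minimal shell-CGI sketch; the log tag and the ACTION/TARGET fields are hypothetical placeholders:

      #!/bin/sh
      # Report and log the basic-auth user for the current request.
      echo "Content-Type: text/plain"
      echo ""
      echo "Authenticated as: ${REMOTE_USER:-unknown}"
      # ACTION and TARGET stand in for whatever the admin page is actually doing.
      logger -t webadmin "${REMOTE_USER:-unknown} ran ${ACTION:-unknown-action} against ${TARGET:-unknown-account}"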

    Read the article

  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of what files have changed and upload them to my server using FTP. Here's my workflow: Edit files on local computer (eg. files in C:\Users\Me\web) Commit changes to local repository using rightclick- TortoiseSVN- SVN Commit. Take the files, open FileZilla (FTP client) and upload the files to a remote server. I was wondering if there was a way in which I could omit step 3 from my workflow. Basically I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository. Information about my computer environment: Windows 7 Ultimate x64 with TortoiseSVN x64 Notepad++ text editor Files edited are PHP, CSS, JS, HTML, etc. Server is running Linux with PHP 5.2 and MySQL. FileZilla is used to upload files. I can connect to the server via SSH if that is needed. Thank you in advance.

    Read the article

  • Fix stubborn 'Setting locale failed.'

    - by user60129
    I have a very stubborn, well-known locale error on Ubuntu 9.10:

      perl: warning: Setting locale failed.
      perl: warning: Please check that your locale settings:
          LANGUAGE = (unset),
          LC_ALL = (unset),
          LC_TIME = "custom.UTF-8",
          LANG = "en_US.UTF-8"

    I have tried the following:

      - Added LANG=en_US.UTF-8 and LC_ALL=en_US.UTF-8 to /etc/environment.
      - Ran apt-get install --reinstall locales (error: perl: warning: Falling back to the standard locale ("C"). /usr/bin/mandb: can't set the locale; make sure $LC_* and $LANG are correct).
      - Ran sudo dpkg-reconfigure locales. Result: "Cannot set LC_ALL to default locale: No such file or directory", and then it updates all locales, including en_US.UTF-8.
      - sudo locale-gen updates all locales successfully, including en_US.UTF-8.
      - sudo locale-gen un_US en_US.UTF-8 gives no error and no other output.
      - /etc/default/locale says LANG="en_US.UTF-8".
      - echo $LANG gives en_US.UTF-8.
      - /var/lib/locales/supported.d/local says en_US.UTF-8 UTF-8.

    locale -a gives me:

      C
      en_AG
      en_AU.utf8
      en_BW.utf8
      en_CA.utf8
      en_DK.utf8
      en_GB.utf8
      en_HK.utf8
      en_IE.utf8
      en_IN
      en_NG
      en_NZ.utf8
      en_PH.utf8
      en_SG.utf8
      en_US.utf8
      en_ZA.utf8
      en_ZW.utf8
      POSIX

    So, well... I am pretty much out of options I can think of. Anybody have any idea? Thanks!
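    Worth noting from the warning itself: LC_TIME is set to "custom.UTF-8", a locale that does not appear in the locale -a output above, and an unsupported value in any single LC_* variable is enough to trigger exactly this Perl warning. A hedged sketch of how one might track it down; the files searched are only the usual suspects:

      # Find where LC_TIME=custom.UTF-8 is being set (profile files, PAM environment, etc.)
      grep -ns "custom.UTF-8" /etc/default/locale /etc/environment ~/.profile ~/.bashrc ~/.pam_environment 2>/dev/null
      # Point LC_TIME at a locale that actually exists, then log out and back in:
      sudo update-locale LC_TIME=en_US.UTF-8
      locale    # should now run without the warnings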

    Read the article

  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. The Drupal site has been hosted on shared hosting (DreamHost) for the last 4 years, and stability and performance have been satisfactory. The site gets around 5k unique visitors and 70-80k page views a day. This will be my first deployment of a Django application, and I'm not comfortable managing my own VPS. I use Ubuntu as a dev server, but I don't have experience with it in a production environment. I have an unrelated internal CRM app (Django) that I host with WebFaction, but security and performance aren't an issue there as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at this moment. I would like to know whether I can host a site with this much traffic in WebFaction's shared environment. How would performance differ in comparison to Linode or Slicehost? Google App Engine isn't an option at the moment as I'll be using my current PostgreSQL database.

    Read the article

  • Office 2010 Trust Center settings: How to enable data connections in the "old" way?

    - by GSerg
    We're planning an upgrade from Office 2003 to 2010 and have identified a big problem. In Office 2003, if the workbook you're opening contains a query table that fetches data from a data source automatically (on file open or at certain intervals), a security dialog pops up asking whether you want to allow that. If you say Yes, the queries will refresh automatically whenever they need to. If you say No, the queries will not refresh automatically, neither on file open nor on timed intervals, but you can still refresh any of them manually at any time by right-clicking and selecting Refresh. There is also a registry parameter that says: don't display that dialog, just allow the queries. This is exactly what we want. On users' computers we have the registry parameter applied, so the users never see any dialogs. On developers' computers the parameter is not applied, so every time a file is opened the developer decides whether to allow auto-refreshing for the current session. Usually the answer is No, because for development it is essential that queries do not refresh when they want to, but only when the developer wants them to. The problem is that in Office 2010, which we are testing, we can't find a way to achieve this behaviour. The allow/disallow messages are now grouped into one yellow button that either allows everything or disallows everything (including, say, macros, if macro security is set to "Disable, but ask"). If you don't click the yellow Allow button, the queries are disabled completely, not just for automatic execution. You cannot right-click and refresh a particular query; doing that summons a security dialog prompting to enable queries, and if you say Yes, all queries in the document are enabled for auto-execution and start executing immediately. This rather ruins our development environment. Is there a way to get the trust settings in Office 2010 to work the same way as before? Is there yet another registry parameter that says: prompt for auto-refresh, but allow manual refresh even when auto-refresh is disabled?

    Read the article

  • Cannot ssh into server

    - by revolver
    I am trying to SSH into a Linux machine running Ubuntu, but the interactive shell gets stuck somewhere and I can't key in anything. I am on Mac OS X Lion. This only happens when I try to access the machine via its external IP; SSH over the local LAN works perfectly.

      macbook:~ user$ ssh -v -v user@serverip
      // I skipped the rest of the log, but I can paste it here again if needed.
      Authenticated to serverip
      debug1: channel 0: new [client-session]
      debug2: channel 0: send open
      debug1: Requesting no-more-sessions@openssh.com
      debug1: Entering interactive session.
      debug2: callback start
      debug2: client_session2_setup: id 0
      debug2: channel 0: request pty-req confirm 1
      debug1: Sending environment.
      debug1: Sending env LC_CTYPE = UTF-8
      debug2: channel 0: request env confirm 0
      debug2: channel 0: request shell confirm 1
      debug2: fd 3 setting TCP_NODELAY
      debug2: callback done
      debug2: channel 0: open confirm rwindow 0 rmax 32768

    My terminal just hangs after this, and I can't key in anything. I checked /var/log/auth on the server and saw that a session is being created and I had already logged in, but I don't see any response on my client machine. I googled around and a lot of the solutions had to do with the Broadcom wireless driver, but I am not even using one, so I am pretty clueless here. To give you more information, the Linux machine is also running a web server, and I have no problem accessing the web server. Thanks. Any help is appreciated.
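    Not a diagnosis, but a hang immediately after "Entering interactive session" that only happens over the external path is the classic signature of an MTU / path-MTU blackhole: the small authentication packets get through, the first full-sized packet does not. A few hedged checks from the Mac side; 1400 is just a starting guess:

      # On OS X, -D sets the don't-fragment bit and -s the payload size (1472 + 28 header bytes = 1500)
      ping -D -s 1472 serverip
      ping -D -s 1400 serverip
      # If only the smaller size gets replies, try lowering the MTU on the server or router as a test:
      #   sudo ifconfig eth0 mtu 1400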

    Read the article
