Search Results

Search found 38034 results on 1522 pages for 'possible'.


  • postfix (for sending mail only) multiple domain setup

    - by seanl
    I have the following problem: I have a CentOS 5.4 VPS hosting a few nginx sites (some static, some CakePHP). I would like to be able to send email from each site's contact page through Postfix to my Google Apps hosted email (different accounts for each site), so that Apps can then send an auto-reply to the person filling in the contact form, etc. I have a bare-bones Postfix installation with the following added into the main.cf config file, from using this guide:

        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps

    (Both of these files have been converted into db files using postmap.) I have configured DNS correctly for each site and set up SPF records. (I'm aware reverse DNS will still reference my actual hostname, not the domain name, and cause a possible spam issue, but one thing at a time.) I can telnet localhost, helo localhost, and send a command-line email from an address in virtual_alias_domains to an address in the virtual_alias_maps file; it seems to send without giving an error, but it is delivered to my local Linux account, not to the email address specified. My question is: am I approaching this the wrong way in terms of the virtual alias mapping, or is this even possible to do in the manner I'm trying? Any help is greatly appreciated, thanks. My postconf -n output looks like this:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = myactual hostname
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550
        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps
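
    For reference, a minimal sketch of what the two lookup tables might contain - the domains and mailboxes here are made up, not taken from the question:

        # /etc/postfix/virtual_alias_domains (run postmap on it after editing)
        # for a domain list only the left-hand key matters; the value is ignored
        site-one.example    anything
        site-two.example    anything

        # /etc/postfix/virtual_alias_maps (run postmap on it after editing)
        # the right-hand side must be a fully qualified remote address
        contact@site-one.example    site-one-contact@googleapps-domain.example
        contact@site-two.example    site-two-contact@googleapps-domain.example

    One thing worth checking against the symptom described: an alias whose right-hand side resolves to a local account name, or to a domain listed in mydestination, is delivered locally rather than relayed.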


  • Can I replicate data between mySQL and SQL Server/SQL Azure?

    - by Ernest Mueller
    I have a replicated mySQL setup running happily on Amazon AWS, making user data available locally in various regions. Now I'm faced with an app that needs to go up on Microsoft Azure, and I need to replicate the data over to there as well. So that's annoying. I am faced with several options:

    1. Replicate from mySQL to SQL Azure/SQL Server - seems like it would be lovely; is this possible? I'd consider using a third-party tool and paying $$ if I had to. We're not using anything complicated in the db feature set, it's just data in tables.
    2. Get mySQL working on Microsoft Azure - which seems really dicey at best. All the HOWTOs I can find say "this is possible but you really shouldn't try this for production apps."
    3. Go non-realtime and do syncs from mySQL to SQL Azure, which may be somewhat expensive and slower.
    4. Rip out all my mySQL on Amazon and use SQL Server there, which would make Baby Jesus cry.

    Has anyone gotten mySQL to SQL Azure/SQL Server replication or syncing working? Or have any other approaches (a NoSQL solution that replicates and might meet our but-we-need-to-join-some-tables needs that can easily be run on Amazon and Azure)?


  • Must all routers really know all routes to every other router?

    - by Philipili
    This is my complicated and long question. First, let's talk about the context.

    Network topology:

        PC A --- RT A --- RT C --- RT B --- PC B

    (RT C has a WAN NIC connected to "the cloud".)

    The situation:

    - PC A must send a packet to PC B
    - Default routes direct packets to the cloud
    - We don't have access to RT C's configuration
    - RT C only knows how to reach network A, not network B
    - RT A knows about network B
    - RT B knows about network A

    RT C's routing table:

        Destination   NIC     Gateway
        0.0.0.0       WAN     Cloud
        Network A     LAN A   RT A's WAN

    RT A's routing table:

        Destination   NIC     Gateway
        0.0.0.0       WAN     LAN A
        Network B     WAN     LAN A

    RT B's routing table:

        Destination   NIC     Gateway
        0.0.0.0       WAN     LAN B
        Network A     WAN     LAN B

    I would like to permit PC A and PC B to communicate, but I don't have access to RT C. Networks B and BC are new. Can PC A send a packet to RT B's WAN NIC (which is possible) and "ask RT B to direct the packet to PC B"? I believe replacing RT B with a VPN server should do the trick, but I would like to know if it is possible to make it work without establishing a new connection.
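
    For illustration only: if RT C could be configured, a single static route would close the gap, since the only thing RT C is missing is a path to network B. In Linux iproute2 syntax, with made-up addresses (network B = 10.0.2.0/24, RT B's WAN = 10.0.1.2):

        # on RT C: send traffic for network B to RT B's WAN interface
        ip route add 10.0.2.0/24 via 10.0.1.2

    Without that route (or NAT on RT B's WAN side, or a tunnel terminating on RT B), RT C forwards traffic for PC B to its default gateway, the cloud, and replies never come back.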


  • Command line solution for removing parts from a binary file?

    - by zsero
    I have a binary file and I would like to remove parts from it. By removing I mean deleting those parts and thus making the file's size smaller. The parts would be between two ASCII strings. So, for example, the file would look like this:

        ........ start ABCD end ..... start EFGH end ..... start IJKL end ...........

    In this file, I would like to search for the strings "start" and "end" and remove the parts between them. The way I think I can do it is to:

    1. look up all the locations of "start" and "end"
    2. calculate ranges from that
    3. delete those parts

    Right now I am using a GUI-based hex editor with its "Search All", "Select Range" and "Delete" commands, but I am sure it would be possible to solve this with some powerful command-line hex/text editor. Do you know any solution for this problem which doesn't require a GUI for the look-up, copy & paste, select-range and delete commands, but is just a few lines of command line? I am interested in both Linux shell scripts and command-line hex editors under Windows; even Python scripts are welcome. Do you think it is possible to solve this problem just by a simple regex replace? Are there any regex-replace utils which handle binary files well?
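
    Since Python scripts are welcome, here's a minimal sketch that answers the regex question affirmatively - it assumes the markers are the literal byte strings "start" and "end" and that the file fits in memory:

        import re
        import sys

        # usage: python strip_parts.py input.bin output.bin
        with open(sys.argv[1], 'rb') as f:
            data = f.read()

        # non-greedy .*? stops at the first "end" after each "start";
        # re.DOTALL lets the matched span contain newline bytes
        cleaned = re.sub(br'start.*?end', b'', data, flags=re.DOTALL)

        with open(sys.argv[2], 'wb') as f:
            f.write(cleaned)

    So a plain regex replace does work, as long as the tool operates on bytes rather than decoded text.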


  • HP Pavilion DV6500 recovery disk failure

    - by Scott W
    I recently attempted to re-install Windows Vista on an HP Pavilion DV6500 using the factory recovery DVDs, but encountered a strange problem. When the recovery disk attempted to reformat the hard disk, it failed at 22%. The error message provided was not very informative, just the error code "0x400110020000 1005". A Google search turned up some people with a similar problem who asserted that HP has been known to ship corrupted recovery DVDs. The recovery disk did manage to reformat the recovery partition before failing, though, so recovering from the partition is no longer an option. It would be possible to reinstall from an off-the-shelf retail copy of Vista and then pull the drivers from HP's website, but I don't have access to a copy of Vista, and it would really be outrageous to have to purchase a new OS when I have a perfectly valid license already. I thought about biting the bullet and upgrading to Windows 7, but my understanding is that without Vista installed I'd be unable to use the upgrade version, and would be forced to purchase the more expensive non-upgrade retail copy (!). Can anyone suggest a possible solution to this Catch-22? I've run out of ideas.


  • Effective backup and archive strategy for database and linked files

    - by busyspin
    I am using Postgres to store a variety of application data for a webapp. Part of the application involves storing and retrieving user-uploaded files. I am storing the files in the filesystem, with some associated metadata in the database. I am trying to come up with a backup and archive strategy so that I can effectively back up and archive/restore the database and the linked files. Here are the things I want to accomplish:

    1. Perform routine backups that can be used for recovery from failures, and which include all DB data and the linked files. Ideally, this backup would be done while the app is running. Live backup is certainly possible with a DB, but I am not sure how to keep the linked files consistent with the database during the backup process.
    2. Archive chunks of data as they become "old". These chunks must include the database data plus any linked files. It should be possible to put the archived data back into production again, and it would be ideal if it were easy to determine which ranges of objects were stored in each chunk.

    Do you have any advice on how to accomplish these goals? If the files were in the database as BLOBs, these tasks would be much easier, since normal database backup and restore functionality would handle them. I am not sure how to accomplish the same thing when file data is linked to database rows.
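
    A rough sketch of the common ordering trick, with hypothetical paths and database name - it assumes uploaded files are never modified after they are written, which is typical for user uploads:

        # 1. dump the database first (-Fc = compressed custom format, restorable with pg_restore)
        pg_dump -Fc appdb > /backup/appdb.dump

        # 2. then copy the files: every file referenced by the dump already
        #    existed when the dump was taken, so it will be in the copy;
        #    files uploaded in between are just harmless extras
        rsync -a /srv/app/uploads/ /backup/uploads/

    If files can be modified or deleted while live, a filesystem snapshot (LVM, ZFS) taken at dump time is the usual way to make the two halves consistent.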


  • Setup staging with multiple SVN

    - by Kapil Sharma
    We are a startup, setting up new environments for a product to be released soon. The planned server structure, with the planned release flow, is as shown in the image below. Ideally it has a local server (or staging server, shown in green) in the local office, without a public IP address, and a production server (red) at Amazon EC2. Both local and production servers have their own SVN copy. Management here wants to update the production server from the production SVN without giving developers (including freelancers/contract employees) access to it. So for developers there is a local SVN on the local server. Another purpose of the local SVN is to keep a copy of the code on a server under our direct control. Although there are some technical concerns, like how the code on the local server will be updated from the local SVN and committed to the production SVN, the bigger question is: is this structure correct? The major requirement remains: don't give developers access to the production SVN. What are the other possible options to achieve that? Another minor question, if it fits here: if the above structure is correct, is it possible for an SVN checkout to get updated from one SVN (the local one) but commit to the other (production)? If yes, how?

    edit: An answer has been accepted, but for the bounty I'm still looking for an answer to: is this structure correct? What are its pros/cons? A technical solution is already provided by the accepted answer.
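
    On the minor question: a single working copy cannot straddle two repositories (each checkout is tied to one repository UUID), but the usual way to get the same effect is mirroring with svnsync, sketched here with made-up URLs:

        # one-time setup: point the production repository at the local one
        # (the production repo must allow revprop changes via a pre-revprop-change hook)
        svnsync init https://production.example/svn/repo http://local-server/svn/repo

        # run repeatedly (cron, or a post-commit hook on the local repo)
        # to copy new revisions across
        svnsync sync https://production.example/svn/repo

    Note that svnsync mirrors every revision; if management wants to cherry-pick what reaches production, they would instead commit vetted snapshots to the production repository themselves. Either way, no developer credentials need to exist on the production side.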


  • How to diagnose occasional sudden resets?

    - by steve314
    I have a Windows XP system, and have recently upgraded it by adding two 1GB sticks of RAM to the 2x0.5GB already present. Since then, about once per day (the system is used 8+ hours per day), the system has suddenly and unexpectedly reset. On a couple of occasions, the system has frozen completely, only responding to the power button being held in for several seconds to force power off. Nothing at all ever appears in the system event log that might indicate a possible cause - everything seems to suggest business as usual. Sounds like faulty memory - but memtest86+ says otherwise. A full test, taking over an hour, found no issues. The next likely suspicion, then, is that I've knocked something while installing the RAM. Trouble is, everything I can think of to test seems fine. I've opened up the case and prodded a few things around, hoping to get better contact on connections etc., but there's no sign yet as to whether that has made a difference. I thought about a malware-related timing fluke, but again, as far as I can tell I'm all clear. All I can think of to add to my checklist (mainly things I can't easily check) is:

    - The power supply is (1) only 350W, (2) not necessarily the best quality, and (3) powering a Prescott P4 640 3.2GHz. Could that be borderline overloaded or about to die? How do I check?
    - Is it possible that the CPU isn't getting cooled properly? I haven't had the fan past normal tickover even doing video encoding, and the only sane temperature reading from SpeedFan is pretty steady at 36 Celsius, so probably not.

    Any thoughts? Is there a standard procedure for diagnosing this kind of fault?


  • Securely executing system commands as sudo from PHP

    - by Aydin Hassan
    Is it possible? I have written a command-line tool in PHP for creating new environments for our company. It creates system users, directories, databases and vhosts, and restarts Apache, amongst other things. These commands require sudo privileges. I thought it might be a nice idea to have a web interface for it, to make it easier for other non-developers to use. The web app would be behind authentication. When running from the command line I just run sudo tool.php; obviously I can't do this from a web app. How could I do this securely? Giving the apache user sudo access seems silly, as this would mean all sites hosted on the box (e.g. all our environments) would have sudo access. Is it possible to make this tool run under a different user? This user could have sudo privileges for only the commands I need? How do things like Plesk and cPanel do this? Any thoughts?
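
    One common pattern, sketched with hypothetical account names and paths: give the tool its own unprivileged account, let that account sudo only the exact commands it needs, and let the apache user sudo only the tool itself:

        # /etc/sudoers.d/envtool  (edit with: visudo -f /etc/sudoers.d/envtool)
        # 'envtool' is a hypothetical dedicated account
        envtool  ALL=(root) NOPASSWD: /usr/sbin/useradd, /usr/sbin/apachectl graceful
        www-data ALL=(envtool) NOPASSWD: /usr/local/bin/tool.php

    The web app then runs "sudo -u envtool /usr/local/bin/tool.php", and a compromise of any site on the box yields at most the ability to run that one script. Control panels generally take a similar shape: a separate privileged service or helper with a narrow interface, rather than sudo rights for the web server user.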


  • Reducing video mode switching during Linux boot

    - by Zack
    When I boot up my desktop computer, which only has Linux on it, the video mode and/or console font gets switched four times:

    1. When GRUB starts, it switches from 80x25 text to a graphical mode so it can draw a pretty background behind its menu;
    2. GRUB then goes back to 80x25 text after I pick something from the menu;
    3. When the KMS driver for my video card loads, it switches to a much higher-resolution text mode (I don't know if this is a hardware text mode or not);
    4. Finally X starts, goes graphical, and stays that way. I think this last switch does not change the resolution of the video mode, only the graphicalness.

    I'd like to get rid of as many of these mode switches as possible. Ideally, when GRUB takes over from the BIOS it would go directly to the same high-resolution text mode that the KMS driver selects, and the display would stay in that mode till X starts and brings up graphics. I am under the impression that this is possible by mucking with the kernel command line and/or the GRUB console module load parameters, but I don't know the details. GRUB 1.98+20100706, kernel 2.6.32.15 using Nouveau video drivers. Distro is Debian unstable. Please, no answers that involve recompiling anything or cobbling together bleeding-edge kernel/driver combinations; I don't care enough about this to go to that much trouble.

    EDIT: Tobu suggests setting GRUB_GFXMODE to the full pixel resolution of the monitor, and GRUB_GFXPAYLOAD_LINUX=keep to avoid the mode switch after the menu goes away. This does part of what I want, but winds up being worse overall. There's no mode switch after the menu, but there's still a painfully slow screen repaint (I should probably just give up on GRUB's gfxmode, it's waaaay too slow at 1920x1200). More seriously, there's now a double mode switch when nouveaufb loads, along with fun-looking error messages in dmesg:

        [    5.923798] [drm] nouveau 0000:02:00.0: allocated 1920x1200 fb: 0x40250000, bo ffff8801ba5f4600
        [    5.923802] fb: conflicting fb hw usage nouveaufb vs EFI VGA - removing generic driver
        [    5.923821] [drm] nouveau 0000:02:00.0: PFIFO_INTR 0x00000010 - Ch 1
        ("PFIFO_INTR" message repeats 400+ times)
        [    5.925609] Console: switching to colour dummy device 80x25
        [    5.925802] Console: switching to colour frame buffer device 240x75
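
    For anyone retracing the edit above, those two settings live in /etc/default/grub on Debian, followed by a regeneration of the config:

        # /etc/default/grub
        GRUB_GFXMODE=1920x1200
        GRUB_GFXPAYLOAD_LINUX=keep

        # then apply with:
        #   update-grub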


  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as fileserver. Since the datacenter is using HP kit and an EVA 4400 as SAN, we cannot have the Celerra there, as EMC support for Celerra does not cover non-EMC arrays. I have searched for possible options, and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me. We are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb could be a waste of resources. The other option would be to have a virtual fileserver as part of the vmware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks, for one thing) in the past, and this was one of the reasons we moved to the Celerra. What are the other options? We need something as reliable as the Celerra, with as many options as possible. For example, the Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also, we would need to replicate the NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we need only a subset of folders to be replicated. We used CA XOsoft for Windows servers in the past, and the Celerra has an option for Celerra replication. Thank you very much in advance; please ask if I missed any details!


  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes it as easy as possible to write and test all code on my local machine, and sync it with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error: "You don't have permission to access / on this server." To set up my virtual hosts, I edited two files - hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
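
    One detail worth knowing here: with name-based virtual hosts, Apache picks the vhost by the Host header, and a request arriving as url.dyndns.info matches neither ServerName above, so it falls through to the first vhost in the file. A sketch of the usual fix (the DynDNS name is a placeholder):

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            # also answer requests that arrive under the DynDNS hostname
            ServerAlias url.dyndns.info
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    The 403 itself is often WAMP's default access control, which only allows requests from 127.0.0.1 at the Directory level - a separate setting from the vhosts.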


  • publickey authentication only works with existing ssh session

    - by aaron
    Publickey authentication only works for me if I've already got one SSH session open. I am trying to log into a host running Ubuntu 10.10 desktop with publickey authentication, and it fails when I first log in:

        [me@my-laptop:~]$ ssh -vv host
        ...
        debug1: Next authentication method: publickey
        debug1: Offering public key: /Users/me/.ssh/id_rsa
        ...
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: password
        me@host's password:

    And the /var/log/auth.log output:

        Jan 16 09:57:11 host sshd[1957]: reverse mapping checking getaddrinfo for cpe-70-114-155-20.austin.res.rr.com [70.114.155.20] failed - POSSIBLE BREAK-IN ATTEMPT!
        Jan 16 09:57:13 host sshd[1957]: pam_sm_authenticate: Called
        Jan 16 09:57:13 host sshd[1957]: pam_sm_authenticate: username = [astacy]
        Jan 16 09:57:13 host sshd[1959]: Passphrase file wrapped
        Jan 16 09:57:15 host sshd[1959]: Error attempting to add filename encryption key to user session keyring; rc = [1]
        Jan 16 09:57:15 host sshd[1957]: Accepted password for astacy from 70.114.155.20 port 42481 ssh2
        Jan 16 09:57:15 host sshd[1957]: pam_unix(sshd:session): session opened for user astacy by (uid=0)
        Jan 16 09:57:20 host sudo: astacy : TTY=pts/0 ; PWD=/home/astacy ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log

    The strange thing is that once I've got this first login session, I run the exact same ssh command, and publickey authentication works:

        [me@my-laptop:~]$ ssh -vv host
        ...
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        ...
        [me@host:~]$

    And the /var/log/auth.log output is:

        Jan 16 09:59:11 host sshd[2061]: reverse mapping checking getaddrinfo for cpe-70-114-155-20.austin.res.rr.com [70.114.155.20] failed - POSSIBLE BREAK-IN ATTEMPT!
        Jan 16 09:59:11 host sshd[2061]: Accepted publickey for astacy from 70.114.155.20 port 39982 ssh2
        Jan 16 09:59:11 host sshd[2061]: pam_unix(sshd:session): session opened for user astacy by (uid=0)

    What do I need to do to make publickey authentication work on the first login?

    NOTE: When I installed Ubuntu 10.10, I checked the 'encrypt home folder' option. I'm wondering if this has something to do with the log message "Error attempting to add filename encryption key to user session keyring".
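
    The "filename encryption key" error is consistent with the encrypted home folder: before the first login, ~/.ssh/authorized_keys sits inside the still-locked ecryptfs home, so sshd cannot read it and falls back to password authentication; that first password login unlocks the home, which is why the second attempt succeeds. A common workaround, sketched with a hypothetical directory, is to keep authorized_keys outside the encrypted area:

        # /etc/ssh/sshd_config  (%u expands to the username)
        AuthorizedKeysFile /etc/ssh/authorized_keys/%u

        # then, roughly:
        #   sudo mkdir -p /etc/ssh/authorized_keys
        #   sudo cp ~/.ssh/authorized_keys /etc/ssh/authorized_keys/$USER
        #   sudo service ssh restart

    Note that the home directory itself still stays locked until something supplies the passphrase, so a key-only login may land in an unreadable home unless ecryptfs is set up to unwrap at login.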


  • Starting my own server - basic recommendations and questions [closed]

    - by Ilia Rostovtsev
    Possible Duplicate: Can you help me with my capacity planning?

    I'm planning to start my own high-performance server and then use colocation services to keep it up and running. I'm planning to use it for processing videos and keeping a big video site up (using FFmpeg, MEncoder, etc.). I just need recommendations on whether the listed hardware is good enough, will work together well, and is fast enough. Do I need anything else (have I missed something)? I do remember about CPU coolers, though! I'm planning to use SSD drives, so please tell me whether they will work just like regular HDDs (but much faster). Can they be used in RAID (is this possible for SSDs)? Here is what I would like to get:

    - Intel Server System SR1600URHSR (Urbanna) or Intel Server System SR1695WBAC
    - 2 x Intel Xeon X5650
    - 4 x 16GB DDR-III 1333MHz Kingston ECC Reg (KVR13R9D4/16)
    - 3 x (or maybe 4 x) 480GB SSD Intel 520 Series (SSDSC2CW480A3K5)

    Which server system would be better? Is the listed hardware new/good enough and worth buying at the moment? Should I maybe look at something slightly more expensive but more up to date and powerful? As software I would like to use CentOS 6 64-bit + WHM/cPanel. Any other suggestions for a cheaper but same/more powerful server management system than WHM? And what are the most important points to keep in mind when starting/maintaining your own server?


  • Linux commands show different results

    - by ClydeFrog
    I'm really having a hard time processing these results on my Ubuntu server. I have a major problem with my JBoss server where I get FileNotFoundExceptions along with "No space left on device" errors. I thought "maybe I'm out of disk space", and used the df command to figure out how much I have left:

        root@ubuntu1:/# df -h
        Filsystem                  Storlek Anvnt Tillg Anv% Monterat på
        /dev/mapper/ubuntu1-root   36G     13G   21G   38%  /
        none                       2,0G    192K  2,0G  1%   /dev
        none                       2,0G    0     2,0G  0%   /dev/shm
        none                       2,0G    64K   2,0G  1%   /var/run
        none                       2,0G    0     2,0G  0%   /var/lock
        /dev/sda1                  228M    23M   193M  11%  /boot
        /dev/mapper/vgdata-lvdata  79G     9,2G  66G   13%  /data

    (The headers are Swedish: filesystem, size, used, available, use%, mounted on.) As you can see, I have plenty of space left. I also checked whether I'm out of inodes:

        root@ubuntu1:/# df -i
        Filsystem                  Inoder   IAnv   IFria    IAnv% Monterat på
        /dev/mapper/ubuntu1-root   2346512  61992  2284520  3%    /
        none                       505380   773    504607   1%    /dev
        none                       507383   1      507382   1%    /dev/shm
        none                       507383   30     507353   1%    /var/run
        none                       507383   2      507381   1%    /var/lock
        /dev/sda1                  124496   230    124266   1%    /boot
        /dev/mapper/vgdata-lvdata  10486784 233945 10252839 3%    /data

    But then I used du:

        root@ubuntu1:/# du -s -h /*
        7,5M    /bin
        23M     /boot
        19G     /data
        192K    /dev
        11G     /eniro
        5,3M    /etc
        112K    /home
        0       /initrd.img
        183M    /lib
        0       /lib64
        16K     /lost+found
        12K     /media
        4,0K    /mnt
        4,0K    /opt
        du: kan inte komma åt "/proc/20452/task/20452/fd/3": Filen eller katalogen finns inte
        du: kan inte komma åt "/proc/20452/task/20452/fdinfo/3": Filen eller katalogen finns inte
        du: kan inte komma åt "/proc/20452/fd/3": Filen eller katalogen finns inte
        du: kan inte komma åt "/proc/20452/fdinfo/3": Filen eller katalogen finns inte
        0       /proc
        18M     /root
        8,2M    /sbin
        4,0K    /selinux
        8,0K    /srv
        0       /sys
        40K     /tmp
        691M    /usr
        1,2G    /var
        0       /vmlinuz

    (The Swedish du errors mean "cannot access ...: No such file or directory" - just /proc churn.) Notice that /data and /eniro are 30G combined! How is that possible? Do I have a leak somewhere? Or is it something else?

    ----- EDIT 1 -----

    OK, I figured out that /data has its own mount, so it's not possible to combine /data and /eniro, because they aren't on the same filesystem. But then how come df says 9,2G is used on /data's filesystem when du says the /data directory holds 19G?
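
    Two standard checks for df/du disagreements, offered as diagnostics rather than a definitive answer:

        # du normally descends into any filesystem mounted below /data
        # (a bind mount or nested mount inflates it); -x stays on one filesystem
        du -shx /data

        # the opposite mismatch (df > du) is usually deleted-but-still-open files,
        # which df counts and du cannot see; list them with:
        lsof +L1

    Here du (19G) exceeds df (9,2G used), which suggests the first case - something mounted or bind-mounted beneath /data being counted twice - rather than a leak.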


  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assuming I have a relatively complex web application, I am interested in testing the performance of a new version using traffic that is as realistic as possible. The traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests). The webapp depends on many servers (databases, frontends, etc.). I can think of two basic directions:

    1. Recording every incoming request with its timestamp in production, in a centralized manner, and replaying it from N clients to reproduce a load as close as possible to the original. Issue: because we have many servers, getting the centralized log is not trivial.
    2. Having a system duplicate requests to a staging area, so that I could "plug" a dev version of my webapp into it at any time without affecting production. Issue: I have not found much information about this except this, which suggests to me that it may not be the best solution. On the other hand, it is realistic by definition.

    What is the standard way of doing this kind of testing? I did not find much information about load testing with complex, realistic traffic.
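
    For direction 1, the replay half can start very small. A minimal sketch, assuming a hypothetical log format of one request per line ("unix_timestamp<TAB>method<TAB>path") and deliberately ignoring the hard parts the question raises (sessions, POST bodies, multiple clients):

        import sys
        import time
        import urllib.request

        BASE = "http://staging.example:8080"  # assumed target, not from the question

        def replay(log_path):
            t0 = None
            wall_start = time.time()
            with open(log_path) as log:
                for line in log:
                    ts, method, path = line.rstrip("\n").split("\t")
                    ts = float(ts)
                    if t0 is None:
                        t0 = ts
                    # sleep so requests keep their original relative spacing
                    delay = (ts - t0) - (time.time() - wall_start)
                    if delay > 0:
                        time.sleep(delay)
                    req = urllib.request.Request(BASE + path, method=method)
                    try:
                        urllib.request.urlopen(req, timeout=10)
                    except Exception as exc:
                        print("failed:", path, exc, file=sys.stderr)

        if __name__ == "__main__":
            replay(sys.argv[1])

    Session-dependent logic is exactly what breaks naive replays like this one, which is why dedicated tools (or direction 2's live duplication) exist.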


  • Find out when a system went down?

    - by Clinton Blackmore
    I have a Mac OS X 10.5 server, with a RAID set in it, that went down due to a power outage on Thursday, and the machine is not booting happily right now*. Is it possible to find out when the machine went down, while not booted off the internal drive? (I'm booted off an external drive, waiting for the RAID sets to initialize.) Normally, I'd run last. The man page doesn't indicate that I can run it against a different startup volume. It looks possible to parse /var/log/utmpx, but I don't think it'd be worthwhile to do that from scratch for this one-off problem.

    * I'm still trying to figure out why it isn't happy, and may ask a follow-up question. Right now I can see that UserNotificationCenter crashed repeatedly early Thursday morning, and that securityd, mdworker, and ARDAgent crash shortly after startup [I think - I want to verify when the box went up and down]. The login window does not come up right (I think it is crashing or not able to cope with a dead securityd). The box is supposed to go down when the UPS tells it power is out; at the moment, I'm wondering if it went down and turned back on multiple times! I sure hope not.
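
    One low-tech check that avoids parsing utmpx: system.log simply stops when the machine dies, so the last timestamps in the copy on the internal volume bracket the outage (the volume name below is a placeholder for however the internal drive mounts):

        tail -n 40 "/Volumes/Internal HD/private/var/log/system.log"

    Repeated gaps in the timestamps would also show whether the box bounced up and down several times, as feared.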


  • Wildcard DNS entry to match lang subdomain

    - by Adam Benayoun
    Hey, we have a website www.example.com pointing to x.x.x.1, and a system of multiple minisites, all on subdomains of example.com pointing to x.x.x.2. Basically, what we have in place is a wildcard DNS entry that matches any possible subdomain; once a request reaches x.x.x.2, the Apache vhost intercepts it and redirects it to a PHP script, which in turn knows which minisite to serve. On www.example.com, however, we serve content that is translated into several languages. Until a few weeks ago you could switch languages by clicking on a flag, and you'd be served the translated content. The only problem is that the URL wouldn't change, and SEO-wise this isn't the best solution. Now, I cannot change the way subdomains are handled (being redirected to x.x.x.2), since we have hundreds, if not thousands, of minisites live. I have to come up with a solution to have language.example.com resolve to x.x.x.1, and then a rewrite rule that would rewrite the fake subdomain into a URL, in order to pass the language parameter to example.com. One solution is to list all possible languages as DNS entries right before the wildcard DNS entry. The other solution, which I am almost sure is not feasible, is to have some kind of regex in a DNS entry matching all subdomains with 2 letters (en|es|fr|cn|cl, etc.). Any ideas?
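
    DNS has no regex matching, so the first option is the workable one: explicit records always take precedence over a wildcard. A sketch in BIND zone-file style, reusing the question's placeholder addresses:

        ; explicit two-letter language hosts beat the wildcard
        en   IN  A   x.x.x.1
        es   IN  A   x.x.x.1
        fr   IN  A   x.x.x.1
        ; everything else still falls through to the minisite handler
        *    IN  A   x.x.x.2

    The list of languages is finite and changes rarely, so maintaining a few dozen explicit records is usually the least surprising solution.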


  • need advice on data center move, communication with both facilities during transition

    - by Brian Roden
    We are beginning the process of moving to a new facility. Office and warehouse operations will both be moving, and we must get shipping operations up and running at the new location while continuing to ship from the old location. Our contract with some third-party warehouse tenants requires two-business-day turnaround (only weekends and holidays excluded), so we can't have major downtime during the move. We would like to keep our 172.16.60/61.xxx internal address space in use throughout the move. Is it possible to keep using this same internal range, and have our existing WatchGuard Firebox 520 and whatever router we get for the other location (preferably the same model) just treat both locations as one network, leaving our host IPs the same throughout the move? Renumbering the servers when they move isn't a big deal, but our wireless terminals for order picking in the warehouse have fixed IPs (and a fixed-IP, non-DNS reference to the host they speak with) and would be a massive undertaking to reconfigure when the servers move (each device would have to be reconfigured at least twice: some when we start using them in the new building while the host is still here, all of them in both locations when the host moves to the new building, and the rest when they finally make the move to the new building). We're trying to avoid that if possible.


  • Visual Studio 2010 Beta 2, built-in font smoothing

    - by L. Shaydariv
    I've just installed Visual Studio 2010 Beta 2 onto my Windows XP box to evaluate it and check whether it meets my preferences the way its predecessors did. OK, I've temporarily defeated an urgent bug with a strange workaround (I could not open any file from the Solution Explorer), and it left me with bad memories, but never mind. The first thing I saw on opening the code editor was ClearType font rendering. So unexpected. I must note that I do not use the standard Windows rendering techniques; I prefer GDI++, a font renderer developed by Japanese developers. (GDI++ renders fonts in Mac/Win-Safari style across all of Windows.) For me personally, GDI++ achieves great font-rendering results, allowing me to use the DejaVu Sans Mono font with really nice smoothing in Visual Studio 2008 (VS 2005 too, though VS 2005 crashes in this case). But GDI++ cannot affect the Visual Studio 2010 Beta 2 text editor - it uses ClearType (right?), and it does not care about the system font-smoothing settings. It could be an editor based on WPF, right? So as far as I can see, I can't use GDI++ anymore, because it hooks Windows GDI(+) but not WPF. So I've got several questions:

    1. Is it possible to disable the VS 2010 b2 built-in ClearType, or override it with another font smoother?
    2. Is it possible to install a Safari-like font renderer for Visual Studio 2010 [betas]?

    Thanks a lot.


  • Disabling a laptop's (PB TJ-75) faulty card reader in Linux

    - by Gab
    My problem is that my laptop [PB TJ-75] has a faulty Alcor card reader. It's 100% sure: the device is dead and unusable, whatever the OS. It cannot be disabled in the BIOS [latest: Vendor: Phoenix Technologies LTD, Version: V1.26, Release Date: 05/04/2010]. If I could detach it from the main board easily, and if with that the system would never look for it again, I'd be very happy! Is that possible - has anyone ever tried it? Or maybe replacing the BIOS with a more open one which lets you disable the card reader - does such a thing exist? Here's what I've tried to disable it so far:

    - In Win7, I choose 'disable' in Device Manager and that's OK. If I don't, the device keeps appearing and disappearing and a lot of resources are used.
    - In Lubuntu 13.04, I get extra boot time, with the message 'sdb, assuming drive cache, etc.'
    - I tried other distros (ISOs booted by GRUB). I can boot Puppy, GParted, and Redobackup apparently without any problem.
    - I cannot boot Debian (live or install), Crunchbang, or Tails. I get a loop: 'usb device, scsi n+1 blabla'.
    - I tried "nousb": no result. I blacklisted EHCI: no result. Then I blacklisted the usb_storage module: better boot time in Lubuntu, with just the message "...data transfer failed", and better shutdown time too. But no way to use USB storage media.
    - In Debian, it ends with a BusyBox prompt.

    Is it possible to just disable that Alcor card reader? Does it have a specific module? Is there a special kernel boot option that I missed? Does it have something to do with kernel recompiling, and if so, how would that work with ISOs? Programming a driver which says everything is OK (beyond my comprehension for the moment)? Disabling the device by vendor ID? What is the best way?
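
    If lsusb shows the reader on the internal USB bus, one hedged option is a udev rule that de-authorizes it before any driver binds; the vendor ID below is a guess (many Alcor readers report 058f) and must be replaced with whatever lsusb actually prints:

        # /etc/udev/rules.d/99-disable-cardreader.rules
        SUBSYSTEM=="usb", ATTRS{idVendor}=="058f", ATTR{authorized}="0"

    This only applies if the reader really is USB-attached; if lspci shows it instead, blacklisting its kernel module via /etc/modprobe.d/ is the equivalent lever.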


  • maximum number of connections in Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50MB) on my network. I banned some famous ports (e.g. torrent), but some downloads are still possible over the HTTP port, and obviously I cannot ban port 80! A simple solution is limiting the maximum number of simultaneous connections for each IP (e.g. 3 connections). That's possible in Squid with this config:

        acl ACCOUNTSDEPT 192.168.5.0/24
        acl limitusercon maxconn 3
        http_access deny ACCOUNTSDEPT limitusercon

    But this solution has a really bad impact on web browsing, because any smart browser fetches different parts of a website over several simultaneous connections to speed things up. With a cap on the number of connections, the browser will fail to get some parts, and the website will be shown partially, with some parts/images/frames missing. So, can we limit the maximum number of persistent connections instead? I think this policy would work:

    - limit the number of connections that stay alive for more than 10 seconds
    - but leave the number of simultaneous connections for every IP unlimited

    But how can we implement this policy in Squid? With which config?

    UPDATE: artifex and Tom Newton offered a bandwidth-limiting approach to fight downloaders. But bandwidth limiting in Squid has a shortcoming: it's static and cannot change dynamically, so a person gets a limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, this solution doesn't stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that stays alive longer than a specific time), downloading big files will be almost impossible (there's always some way!)
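
    Since the real goal is blocking big downloads rather than capping connections, it may be worth knowing that Squid has a direct knob for response size - a sketch in Squid 3.x syntax (2.x uses an allow/deny form):

        # refuse any HTTP reply whose body exceeds 50 MB
        reply_body_max_size 50 MB

    This cuts off oversized transfers outright instead of slowing them, though responses without a Content-Length header are only aborted once the limit is actually crossed.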


  • NAT vs public IP (and blocked ports)

    - by user1646166
    I have a problem with my ISP. They say that they don't block any ports and that I have a public IP, while I think both statements are false. Before I talk to them again (which is really tough when my understanding of these terms differs from theirs), I would like to make some things clear. It seems like my computer is behind NAT (is it possible to have a public IP and be behind NAT at the same time?). When I check my IP through some external server and type that IP into a browser, I get the home page of some router (not mine). Isn't that proof that my IP isn't public? Also, I have problems making connections on some ports. E.g. when I'm trying to connect through some high port (> 1023) via SSH, it doesn't work. Is it possible that a certain range of outgoing ports from my computer is blocked? Or is it simply that my SSH client (PuTTY) can't receive incoming packets because of blocked incoming ports? To avoid some questions: it's not a problem with my router. I tried connecting my PC directly and it also didn't work, while connecting by 3G, using a phone with USB tethering, does work. Thanks!
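
    A quick self-check settles the public-IP question, assuming any "what's my IP" service: compare the address on your interface with the address the outside world sees - if they differ, there is NAT in the path:

        # address actually assigned to the local interface
        ip addr show dev eth0

        # address seen from the outside
        curl ifconfig.me

        # and to probe an outbound high port without involving SSH
        # (host and port here are placeholders)
        nc -vz example.com 8022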


  • Cutting Ubuntu to the bone for a VirtualBox VM

    - by user32853
    I've been looking around for a Linux variant which will install only the software I need, rather than everything Ubuntu (for example) puts in by default. This is to create a virtual machine in VirtualBox which has bash, Apache, Python, Perl, SQLite, OpenSSH and a few other programs, but nothing else. I'd prefer to go with Ubuntu if possible, but another modern distro would do as well (I like using apt-get and yum rather than downloading/compiling etc.). So far, I've tried:

    - SuseStudio.com, which is probably the best so far.
    - Pressing F4 to get the boot options on Ubuntu 9.10, but there is no minimal installation (I think there was once).
    - Arch Linux: a slightly confusing install procedure, but I might go back and try again.
    - Gentoo: started well, but fairly soon the HD on the virtual machine went to 2GB, even before the installation had started in earnest (I'd only partitioned the disks).

    I realise there are various "small" Linuxes around like Puppy, Feather, DSL, etc., but they seem to be aimed at desktop users or at a techie's toolkit, and I want a small-as-possible server distro which can be managed with tools like apt or yum or similar. TIA for any advice you can offer! -- Monty
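
    One more route that stays apt-based, sketched with the 9.10 release name: debootstrap installs a bare-bones Ubuntu into a directory or mounted partition, and you add only the packages you list afterwards:

        # from an existing Debian/Ubuntu system, target partition mounted at /mnt
        sudo debootstrap --variant=minbase karmic /mnt http://archive.ubuntu.com/ubuntu

        # then add just the needed packages, skipping recommends
        sudo chroot /mnt apt-get install --no-install-recommends \
            openssh-server apache2 python perl sqlite3

    A kernel and bootloader still have to be installed before the VM boots on its own, so it's more work than SuseStudio, but the result contains nothing you didn't ask for.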


  • configs for several sites in apache with ssl

    - by elCapitano
    I need to secure two different sites in Apache. One of them should only be a proxy for a different server which is running on port 8069. One vhost (which is served natively by Apache) runs with SSL:

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            DocumentRoot /var/www/cloud
            ServerPath /cloud/
            #CustomLog /var/www/logs/ssl-access_log combined
            #ErrorLog /var/www/logs/ssl-error_log
        </VirtualHost>

    The other one is not running, and not even registered. When I try to access it, I get an exception (ssl_error_rx_record_too_long):

        <VirtualHost *:443>
            ServerName 192.168.1.20
            ServerPath /erp/
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyVia On
            ProxyPass / http://127.0.0.1:8069/
            ProxyPassReverse / http://127.0.0.1:8069
            RewriteEngine on
            RewriteRule ^/(.*) http://127.0.0.1:8069/$1 [P]
            RequestHeader set "X-Forwarded-Proto" "https"
            SetEnv proxy-nokeepalive 1
        </VirtualHost>

    My wish is the following configuration:

        192.168.1.20        -> unsecured local path to website
        192.168.1.20/cloud/ -> secured local document path for cloud
        192.168.1.20/erp/   -> secured proxy on port 80 for http://192.168.1.20:8069

    How is this possible? Is this even possible? Perhaps cloud.192.168.1.20 and erp.192.168.1.20 would be better?! Thank you
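
    One thing Apache will do with the config above: both blocks claim *:443 with the same ServerName, so only the first is ever selected - and since the split here is by path (/cloud/ vs /erp/) rather than by hostname, virtual hosts can't separate them anyway, because the vhost is chosen before the path is examined. A sketch of a single SSL vhost covering the whole wish list (paths as in the question):

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key

            # /cloud/ served from disk: /var/www/cloud
            DocumentRoot /var/www

            # /erp/ proxied to the application on port 8069
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        /erp/ http://127.0.0.1:8069/
            ProxyPassReverse /erp/ http://127.0.0.1:8069/
            RequestHeader set X-Forwarded-Proto "https"
        </VirtualHost>

    The plain unsecured site stays in a separate *:80 vhost. The cloud.192.168.1.20 idea won't work as written - labels can't be prepended to an IP address - but with real DNS names, one vhost per name would be the cleaner split.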

