Search Results

Search found 19134 results on 766 pages for 'support contract'.


  • A little guidance setting up FTP server authentication on Windows Server 2008 R2 standard?

    - by Ropstah
    I have a (clean) server running Windows Server 2008 R2 Standard. I would just like to use it for serving a website and an FTP site through IIS. IIS is installed and serves my website properly. I have now added an FTP site, but when I try to log on with my username and password I get the following error: 530 User cannot login. From this article (http://support.microsoft.com/kb/200475) I understand that four causes can be pointed out:

    1. The "Allow only anonymous connections" security setting has been turned on in the Microsoft Management Console (MMC). Not the case.
    2. The username does not have the "Log on locally" permission in User Manager. The user is in the Users group; however, I'm not able to log on through RDP. I tried configuring this by following this article through GPMC, but that only works when I'm logged in as a domain user on a domain controller, which I'm not: I'm logged in as administrator.
    3. The username does not have the "Access this computer from the network" permission in User Manager. Not sure what this implies...?
    4. The domain name was not specified together with the username (in the form of DOMAIN\username). Tried adding the server name (server\username), not working...

    I am an absolute server noob and I'd just like to be able to connect through FTP... Any guidance is highly appreciated!

    Read the article

  • OpenVPN - Cannot browse IPv4 websites

    - by user1494428
    I have set up an OpenVPN tunnel on my VPS (OpenVZ, Ubuntu 12.04). The problem is that I can only browse websites which support IPv6, like Google. http://whatismyv6.com/ reports that I have an IPv6 address, so I guess this is the problem.

    Server configuration:

        dev tun
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        ca /etc/openvpn/easy-rsa/keys/ca.crt
        cert /etc/openvpn/easy-rsa/keys/server.crt
        key /etc/openvpn/easy-rsa/keys/server.key
        dh /etc/openvpn/easy-rsa/keys/dh1024.pem
        push "route 10.8.0.0 255.255.255.0"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        push "redirect-gateway def1"
        comp-lzo
        persist-tun
        persist-key
        status openvpn-status.log
        log /var/log/openvpn.log
        verb 3

    Client configuration:

        client
        remote xx.xx.xx.xx 1194
        dev tun
        comp-lzo
        ca ca.crt
        cert client1.crt
        key client1.key
        redirect-gateway def1
        verb 3

    I have configured NAT with this command:

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -j SNAT --to xx.xx.xx.xx

    Can someone explain how I can make it work (forcing IPv4?)? I had the same problem with another VPS, and I also tried another client (all Windows 7).
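
    A minimal sketch of the usual checks for this symptom, assuming the VPS's public interface is venet0 (typical on OpenVZ; substitute your own interface, and keep your SNAT rule instead of MASQUERADE if you prefer):

        # confirm the server actually forwards IPv4 between tun0 and the public interface
        sysctl -w net.ipv4.ip_forward=1

        # NAT the VPN subnet out of the public interface
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE

        # watch the counters while a client browses; if the POSTROUTING rule never matches,
        # client traffic is not reaching the NAT step at all
        iptables -t nat -L POSTROUTING -v -n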

    Read the article

  • Linux kernel - can't access sda16 & sda17

    - by osgx
    I can't access sda16, sda17, and higher partitions from my Linux system. It is a rather old Debian install, with kernel 2.6.23. I know that such an old Linux kernel can't access more than 15 partitions on a single SATA disk. What kernel version should I use to be able to access sda16, sda17, etc.? I want to update only the kernel, not the whole Linux distribution.

    PS. There is a Windows NT kernel which can access and format the 16th, 17th or higher partition, but my intention is to use sda16 and sda17 from Linux (I want the Linux kernel).

    PPS: dmesg:

        sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 sda8 sda9 sda10 sda11 sda12 sda13 sda14 sda15 >
        sd 2:0:0:0: [sda] Attached SCSI disk
        sd 4:0:0:0: [sdb] xxx 512-byte hardware sectors ...

    So there is no mapping of sda16, sda17, ... to sdb. sdb is the second physical hard drive.
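
    A hedged workaround sketch while staying on the old kernel: device-mapper can expose the extra partitions even though the sd driver stops at 15 (mainline support for more than 15 partitions per SCSI/SATA disk came with the extended device numbers introduced around kernel 2.6.28, so any reasonably recent kernel should also work directly). The /dev/mapper names below are what kpartx typically creates; verify on your system.

        # map every partition found in sda's partition table through device-mapper
        kpartx -av /dev/sda

        # the high partitions then appear as device-mapper nodes
        mount /dev/mapper/sda16 /mnt/p16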

    Read the article

  • iTunes Title Bar is Equal to the Location of the Library file?

    - by Urda
    I have never been able to get a clear answer on why the latest edition of iTunes does this. I have my entire iTunes library located in C:\itunes\ and the library data files inside C:\itunes\!library_info for backup purposes. However, when version 9 of iTunes came out, the title bar went from showing "iTunes" to showing "!library_info". Is there any way to get around this without moving my data files away? Annoying "feature", if that is what it is. Again, Apple support and forums were of no help to me. Anyone have insight on this?

    System info: Windows Vista x86 Ultimate, latest updates. iTunes version 9.0.3.15.
    Screenshot: http://farm5.static.flickr.com/4072/4414557544_d0b25eb64c_o.jpg
    Flickr page: http://www.flickr.com/photos/urda/4414557544/

    UPDATE: I put a bounty on this, and would be open to hacking the registry or doing a custom config. Please help me fix my title bar!
    UPDATE 2: I would prefer not to move my library files out of itunes\!library_info, to avoid them inter-mingling with my music library.

    Read the article

  • Aging SBS needs updates / Thoughts for one-off, off-line complete backup?

    - by tcv
    Hey guys, so we checked out the status of an SBS 2003 box at one of our more recent, spend-averse clients and found it to be woefully out of date. Scary out of date. I think it's running IE2. OK, maybe not that far back. Anyway, I was thinking that I could use some kind of disk-imaging software to image the four IDE drives within and, in the event the server gets some kind of update-induced indigestion, I could completely restore. Usually my go-to software for this is Acronis, but my client will likely balk at a $500 price tag for a one-off backup with their server product. I had thought we could use the boot media from, say, Backup & Recovery 10 to take an off-line image of all the drives. According to their chat tech support, however, it will not work. I pressed for the technical reasons and they said they'd email me. They haven't emailed me. They still might.

    This server is running SBS 2003, pre-SP2. It's got four IDE disks. One is a basic disk, which contains the OS. The others are bound as a dynamic disk.

    You might ask: "Don't they already have backup software?" They do! Backup Exec, a very low-end version that won't even do VSS. I don't know much about BE, but it seems to me that if the worst were to happen, it would mean building a new server OS, installing BE (if the media is available), then restoring. Would it even work? I can take the system down for hours to do a backup, and my goal here is a pretty dead-simple restore if the worst happens. Any and all suggestions are exciting. m

    Read the article

  • Salary Survey Entry Level network position [closed]

    - by will
    Hello, I started interning with a company about 5 months ago, and for the past 7 months I have been a normal part-time employee. This week I have a review, where I am hoping to get a raise. I started at $8 interning, and now I'm up to $13. What I am trying to figure out is how to survey what others are making in similar positions, so I can take that to the review as a base number. Here are my thoughts on my position right now.

    My qualifications:
    • Associate of Applied Science - IT Network Specialist
    • CompTIA A+ certification
    • CompTIA Server+ certification
    • CompTIA Network+ certification
    • Currently pursuing Cisco certifications
    • Junior status at Insert college, pursuing a Bachelors in Information Technology with an emphasis in networking. 3.4 GPA
    • 1 year of working at Insert Company

    My contributions to Insert Company:
    • Offering near full-time through the semester and full-time through the summer
    • Ability to work after hours and on weekends
    • Developed and support helpdesk system
    • Set up and maintain update server to keep desktop clients up to date
    • Deployed and maintain antivirus solution for end users
    • Assist with main projects such as SAN, virtualization, and network survey
    • Migrating end user stations to Windows 7 (current project)
    • Developing imaging solution for desktop PCs (current project)

    Any tips on determining an asking number would help. I was thinking $17-$18, am I way off here?

    Read the article

  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance):

    • It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free).
    • It must not require me to use specific directories or change my way of storing things. (Dropbox does this, IIRC.)
    • It must work in all directions (changes made on any machine should be pushed to the others).
    • All data sent must be encrypted (I have ssh keypairs set up already).
    • It must work even when not all machines are available (changes should be pushed to a machine when it comes back online).
    • It must work even when the directories on some machines are not available (they may be stored on disk images which will not always be mounted). This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted.
    • It must be immediate (using an event-based system, not a periodic one like cron).
    • It must be flexible (if more machines are added, I should be able to incorporate them easily).

    I also have some preferences that I would like to be fulfilled, but do not have to be:

    • It should notify me somehow if there are conflicts or other errors.
    • It should recognize symbolic and hard links and create corresponding ones.
    • It should allow me to create a list of exceptions (subdirectories which will not be synced at all).
    • It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding.

    If you have a solution that meets some, but not all, of the criteria, please contribute it in the comments, as it might be useful in some way and it might be possible to meet some of the criteria separately.

    What I tried, and why it didn't work:

    • rsync and lsyncd do not support bidirectional synchronization
    • csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs
    • DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X
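
    A hedged sketch of one candidate worth evaluating against these criteria: unison does true bidirectional synchronization over ssh and copes with machines that are offline for a while (it reconciles when both ends are reachable again). On its own it is periodic rather than event-based, so it would need to be triggered by launchd/inotify watchers to meet the "immediate" requirement; the paths and hostname below are illustrative.

        # run on a laptop; compares the local tree with the one on the Debian server
        # and propagates non-conflicting changes in both directions
        unison /Users/me/shared ssh://[email protected]//home/me/shared \
            -batch -auto -ignore 'Path tmp'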

    Read the article

  • How do I set up dual monitors on Kubuntu 10.04 using the latest nVidia drivers with a 9800M video card?

    - by NoCatharsis
    I'm a Linux newb, so please try to keep the lingo low-key. I installed the latest nVidia drivers on my laptop, which uses the 9800M card. The laptop is a Gateway P-7805u and I'm connected to the second monitor using VGA. Also, before installing the nVidia drivers (and just using the basic drivers included with Kubuntu 10.04), basic dual monitor support worked, except I could not enable compositing features for some reason. So I thought the proprietary drivers would fix this. Several issues have arisen since installation:

    1. I've clicked through all of the display settings to activate the second screen, with absolutely no change.
    2. When I try to apply settings and "Save Configuration" as the nVidia help suggests, I am told that I cannot save to the X.conf file. I assume this is due to permissions on my user account, which I have no idea how to properly configure.
    3. I have no idea where to go from here, as most of the fixes I found online involve Linux syntax and verbiage, to which I'm totally clueless after spending over half my life with Windows.
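
    A small sketch for issue 2, assuming the standard nvidia-settings tool that ships with the proprietary driver: its "Save to X Configuration File" button writes /etc/X11/xorg.conf, which only root can modify, so launch the tool with root privileges before saving.

        # on Kubuntu, kdesudo is the usual graphical-sudo wrapper; plain sudo also works from a terminal
        kdesudo nvidia-settings
        # configure the second monitor under "X Server Display Configuration" (e.g. TwinView),
        # then "Save to X Configuration File" and restart X / log out and back in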

    Read the article

  • High memory utilization with Firebird + Windows Server 2008 R2

    - by chesterman
    I have Windows Server 2008 R2 (64-bit) running a 64-bit installation of Firebird 2.1.4.18393_0 on a physical server with 4 GB of RAM. After a while, Task Manager shows that all memory is used, but the memory of all processes does not add up to even half of that. Unfortunately, the machine then starts swapping. Using RAMMap, I can see that my entire database file is mapped into memory. This only occurs on Windows Server 2008 R2 and Windows 7 64-bit; it doesn't matter whether I use the 32-bit or 64-bit Firebird installation. How can I prevent this? Why does this only occur on W2K8R2 and Windows 7? Thanks in advance.

    UPDATE: Apparently this is caused by the file system cache using all available memory. The Microsoft documentation explains that this WAS an issue in Windows XP, 2003, Vista and 2008, but was solved in Windows 7 and 2008 R2; it also notes that the issue is more common on 64-bit hosts (http://support.microsoft.com/kb/976618). There are some tools (DynCache, SetCache, and the Get/SetSystemFileCacheSize calls in the Windows API) that would let me set an upper limit on the memory used by the file system cache, but the documentation argues that I should not do this on W2K8R2 because it will severely impact overall system performance. Anyway, I tried it: performance was just as bad as before and the page file was still being used, although there is now more than 1 GB of free memory.

    Read the article

  • Issue with exim4u

    - by bretterer
    I am using exim4u for a mail server on Debian. Everything had been working fine until recently, and I have not done anything to the server between the time it was working and now. I have a domain set up that is receiving and sending mail correctly. When I put in a forwarding address to a Gmail address, I can still receive and send email from my webmail client, but the forwarded mail never makes it to Gmail. I have checked the logs and this is what I found:

        2012-04-01 18:47:04 1SEPns-0000aN-Br DKIM: d=gmail.com s=20120113 c=relaxed/relaxed a=rsa-sha256 [verification succeeded]
        2012-04-01 18:47:10 1SEPns-0000aN-Br H=mail-bk0-f43.google.com [209.85.214.43] Warning: X-Spam_score: -0.3
        2012-04-01 18:47:10 1SEPns-0000aN-Br <= [email protected] H=mail-bk0-f43.google.com [209.85.214.43] P=esmtps X=TLS1.0:RSA_ARCFOUR_MD5:16 S=3424 id=CAGZkSKbYc7SJR+yXTgG8ubQvx4PNb0CwHG1DDKGeZ-qFiA$
        2012-04-01 18:47:11 1SEPns-0000aN-Br => /home/mail/mydomain.com/support/Maildir ([email protected]) <[email protected]> R=virtual_domains T=virtual_delivery
        2012-04-01 18:47:12 1SEPns-0000aN-Br => [email protected] <[email protected]> R=dnslookup T=remote_smtp H=gmail-smtp-in.l.google.com [209.85.225.27] X=TLS1.0:RSA_ARCFOUR_SHA1:16
        2012-04-01 18:47:12 1SEPns-0000aN-Br Completed

    I am not a mail server person, so I'm not sure what everything here is saying. It appears to me that it is successfully sending mail to Gmail, though. I have checked my spam folder as well, and nothing is there either. If it would help to have more information from my server, let me know, because I'm not sure what would be of help here.
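
    A small sketch of Exim's built-in address testing, which shows which router and transport a recipient address would hit without sending anything; the addresses below are illustrative stand-ins for the forwarding entry in question.

        # does exim think this local address should be forwarded, and to where?
        exim -bt [email protected]

        # the same, with debugging output from the routers
        exim -d -bt [email protected]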

    Read the article

  • What is the oldest hardware still in production use? How is it kept running?

    - by sleske
    In the spirit of the question What is your oldest hardware that still works?, I'd like to ask: What is the oldest hardware you know that is still in production use? And what challenges did you (or someone else) face in keeping it running (scarce documentation, no support, no spare parts available...)? Most organizations will retire / upgrade software and hardware after 5-10 years, but sometimes old software is kept running on old boxes, because it "just works". I once worked at a client site that was running a critical piece of (in-house developed) business software on a single server running HP-UX. The server was old (ca. 12-13 years), but fortunately still running without problems; however, getting spares would have been very difficult, and since software installation was undocumented, any significant system changes or even new hardware might have caused significant downtime and data loss. We eventually managed to replace it, but this is not always possible. I also read that many organizations still run decade-old mainframe hardware, particularly for highly customized systems controlling industrial machines or power plants. Which old hardware have you encountered? How did you manage these challenges? Related question: http://serverfault.com/questions/82467/should-old-servers-be-retired

    Read the article

  • Limiting bandwidth on a Windows 7 machine

    - by Mihai Damian
    I need to limit the bandwidth on my Windows 7 x64 machine. In the past (on XP) I've been able to use NetLimiter for similar tasks. However for some reason I can't get it to work anymore. For lower limits the bandwidth tests are able to exceed the limit by 10-50%; higher limits seem to be ignored completely and the bandwidth tests report download speeds of over 10 times the speed I set. I'm using speedtest.net and some similar service from my ISP for these tests. Anyway, I don't necessarily need a program as complex as NetLimiter since I only need to throttle my machine's bandwidth, not a specific program's. In case you are wondering why in the world I'd want to cripple my Internet speed, there is a funny story behind this. Long story short, my modem gets random disconnects. Tech support comes in, says my Internet speed is abnormally high and I must be using some tools to somehow make it go faster than it's supposed to and this messes up my modem. I check the connection with another computer and it seems that my PC is the only one in my network that gets abnormal speeds. I reinstall my OS, speed looks normal at first, after I install the batch of 50 or so updates, it goes back to abnormally high speeds and the disconnect problems are not solved. Now I don't have a clue if the explanation the tech team gave me was just a strategy to lay the blame on someone else, but I was trying to give them the benefit of the doubt and see what happens if I really reduce my speed to their specification. Any help appreciated.

    Read the article

  • High speed network configuration

    - by Peter M
    Sorry if this seems to be a stupid question; I'm not sure how to describe what I want to know when searching Google. I will have 2 or 3 devices pumping out data on 100Base-T ports. The combined data rate of all devices is about 15 MB/s, which exceeds the practical capacity of a single 100Base-T link (roughly 12 MB/s) but is well within the realms of a 1000Base-T connection. Each device will send a burst of data in the form of an FTP transfer to a common, single host computer in a sequential manner, i.e.:

    1. Device A establishes an FTP connection and transfers data
    2. Device B establishes an FTP connection and transfers data
    3. Device C establishes an FTP connection and transfers data

    It may be that the A&B, B&C and C&A transfers overlap in the time domain to some extent. There will be minimal traffic going back from the computer to each device (in general, whatever is needed to support the FTP transfers), and the network will be dedicated to transferring data between these devices and the host computer.

    Is it possible to use a switch to combine the multiple incoming 100Base-T streams into a single outgoing 1000Base-T stream? If so, what features should I be looking for in a switch? Or would it be better to have three physical point-to-point 100Base-T dedicated connections between each device and the host computer (thus having at least three physical Ethernet interfaces on that computer)?

    Note that I can't change the interface on the devices, but I am free to choose the network and host computer configuration.

    Thanks for your help,
    Peter

    Read the article

  • Windows-based development environment: Hyper-V, VMware, or VirtualBox on the development machine?

    - by bleepzter
    I am a software engineer with a little bit of an informal "support" function... I am trying to figure out the best possible approach to employing virtualization technologies in our development process. Since the code we develop is server-centric, testing it often requires a VM with specific software requirements. I used to use VMware Player (free version) to run my VMs until both of my laptops started exhibiting issues with corrupted Windows 7 services and dying hard drives. All leads pointed to VMware, which by the way seems to be a solid product if you pay for the Workstation edition ($300).

    On a side note, I have always been a fan of the Windows Server product line. I think it makes for one of the best development environments out there - it is highly scalable, highly reliable, and very efficient. So, to be fair, I replaced the drives of the laptops and installed Windows Server 2008 R2, VS2010 Ultimate SP1, SQL Server 2008 R2, TFS Server 2010 and all the other tools and APIs needed to do my work properly. So now I am stuck with a bunch of VMware VMs. I don't want a repeat of what happened before, and I certainly don't want to bog down my machine with an inefficient hypervisor or services that are not needed. Furthermore, the VMDK hard-disk format used by VMware is not compatible with the VHD format of Hyper-V. It is my understanding that converting from one format to the other can only be done by Microsoft System Center Virtual Machine Manager (SCVMM), which I have downloaded from MSDN and am ready to install. I guess the questions at this point are:

    1. Does SCVMM run as another service in Windows? Is it a memory hog?
    2. What is the better virtualization technology - Hyper-V or VirtualBox - in terms of efficiency, ease of use and, most importantly, memory footprint? (Keep in mind the development environment already has a ton of services running, such as TFS Server, SQL Server, IIS, etc...)
    3. How would you advise proceeding at this point so that the VMs are still used in the test process?

    Thanks
    Martin
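
    On the conversion point, a hedged alternative to installing SCVMM just for the disk format: qemu-img (available in a Linux VM or via its Windows builds) can convert VMware disks to the VHD format Hyper-V uses. The filenames below are illustrative, and the source VM should be powered off (and ideally have snapshots removed) before converting.

        # vmdk -> vhd ("vpc" is qemu-img's name for the VHD format)
        qemu-img convert -f vmdk -O vpc devserver-disk1.vmdk devserver-disk1.vhd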

    Read the article

  • Installing Windows 7 from a USB Hard Drive.

    - by Mark Tomlin
    I have a Western Digital Passport external hard drive (320 GB) that I want to partition so that I can keep my data on it but use some of the free space to install Windows 7 onto my desktop computer. Microsoft has given me the Windows 7 Enterprise edition ISO to download. I would like to partition the external HD so I can fit the ISO image onto it. How would I go about doing this?

    Trying to use GParted to partition the external hard drive has caused a chicken-or-egg problem: GParted can't see the drive unless it's mounted, but when it is mounted it will not allow me to do anything to the partition. When it's not mounted, GParted can't see the drive at all, and as such can't do anything to it.

    Once the drive is correctly partitioned, how do I go about moving the ISO image Microsoft gave me onto the USB external hard drive? Are there any special steps that I need to take? I am using Ubuntu 11.04 and GParted 0.7.0 on my Chromebook to do this. Any support would be appreciated.
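
    A minimal command-line sketch of the partition-and-copy part, assuming the Passport appears as /dev/sdb (check with sudo fdisk -l); the mount points, partition number and ISO filename are illustrative. Note that GParted operates on unmounted partitions, so the fix for the chicken-or-egg problem is to unmount rather than mount before editing:

        sudo umount /dev/sdb1                     # GParted refuses to modify mounted partitions
        sudo gparted /dev/sdb                     # shrink the data partition, add an NTFS partition for the installer

        # once the new partition (say /dev/sdb2) exists, copy the ISO contents onto it
        sudo mkdir -p /mnt/iso /mnt/usb
        sudo mount -o loop Win7_Enterprise.iso /mnt/iso
        sudo mount /dev/sdb2 /mnt/usb
        sudo cp -r /mnt/iso/* /mnt/usb/

        # the partition also needs to be flagged bootable and have Windows boot code written
        # (e.g. with ms-sys from Linux, or bootsect /nt60 from a Windows machine) before the
        # desktop will boot the installer from it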

    Read the article

  • vSphere Promiscuous mode only receiving packets one way from network switch

    - by steve.lippert
    We have two network switches: a PoE switch (SwitchA) to power our phones and users' computers, and a non-PoE switch (SwitchB) for the rest of the network. Each switch is set up to do port mirroring to support our VoIP recording system. SwitchA mirrors specific ports if we need to record a user. SwitchB mirrors one port to monitor our work-at-home users (Internet comes in from a managed router, to the switch, back out to our firewall). These two port-mirroring feeds go into one VMware vSphere 4.1 server, which has four physical NICs in total. The other two NICs feed into an unmanaged switch for connecting to the rest of the network. Once into the vSphere server, all network ports go into a vSwitch, and then one of the guests (Windows 2008 R2) sniffs them and does its thing.

    Everything is working fine and dandy from SwitchB, but from SwitchA we only receive one side of the VoIP packets (going out to the phone, nothing coming in from the phone). Troubleshooting steps I have taken so far:

    1. I hooked up my laptop to the monitor port on SwitchB and I see both sides of the packets.
    2. I swapped which network interface is plugged into the monitor port on SwitchA.

    Because everything feeds into one vSwitch / vNetwork and both sides of the conversation arrive just fine from SwitchB, I believe everything is configured correctly on the vSphere server/guest. What could be causing one-way packets to arrive on my guest machine from only one interface, but not the other? Could a bad cable be causing the problems from SwitchB?

    Read the article

  • OARC's DNSSEC validating resolvers validate all my records but A records

    - by demize
    I have DNS set up with PowerDNS. It serves my DNS pretty well, and it AXFRs to other slaves. The slaves haven't yet updated to the most recent records, but that doesn't appear to affect the validation. Any record type I can think of (AAAA, MX, TXT, even the CNAME for www) validates -- except A records:

        dig @149.20.64.20 +dnssec www.demize95.com CNAME

    returns

        ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 7

    while

        dig @149.20.64.20 +dnssec demize95.com A

    returns

        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 7

    (no ad flag). The same happens with any other A record I have. I set up DNSSEC with pdnssec, and it works for all the other records, but it has never validated my A records. What's the problem here?

    Also, a side note: I have to use ISC's DLV to create the chain of trust, since my domain registrar doesn't yet support sending the DS records to the com zone.
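
    A hedged first round of checks with PowerDNS's own pdnssec tool (the zone name comes from the question): rectify-zone rewrites the ordername/auth metadata that signing depends on, and an unrectified zone is one common culprit when only some answers validate.

        pdnssec rectify-zone demize95.com     # fix ordering/auth metadata after any record changes
        pdnssec check-zone demize95.com       # sanity-check the zone contents
        pdnssec show-zone demize95.com        # list the keys and confirm the zone is actually signed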

    Read the article

  • Implementing emailing (bulk & event based) features for my website.

    - by Kabeer
    Hello. For my upcoming social networking website, I am looking for suggestions on the best way to implement emailing. Here are some of my requirements and constraints.

    Requirements:
    • Should be able to send emails based on events (new registrations, change password, etc.), promotions (advertisements based on user consent), bulk mails (newsletters), reminders (profile updates), etc. I hope I got the point through.
    • Should be able to process faults (incorrect email address, mailbox full, etc.)
    • User-initiated invites (inviting friends to connect)

    Constraints:
    • As of now I am looking at GoDaddy for hosting; subsequently I shall move to, maybe, Amazon's cloud. GoDaddy seems to be excruciatingly conservative (not always bad) when it comes to the ability to send email.
    • My tests on GoDaddy so far have been discouraging. There is a limit to the number of emails I can send, and sometimes if an email carries special characters it throws strange exceptions, such as claiming there was a virus-infected attachment (even though I hadn't attached a thing). The replies from GoDaddy support have been equally funny. My intent is not to portray GoDaddy as wrong, but I am looking for a work-around that frees me from these constraints.

    I am looking for a mechanism / service that is either free or very cost-effective. I wonder how other sites address this. Mine is a .NET / Windows-based application.

    Read the article

  • C# sends SQL data 4 times slower from one box than from another

    - by Bobb
    W2003, .NET 3.5, SQL 2008. I have prod and UAT app servers deployed in two different data centres, and a C# app which reads a text file, parses the text and sends the data to SQL in bulk. The SQL server is in the US and the app servers are in London (but in different places). All POPs have dedicated network connections; there is no public internet involved. When the app runs on the UAT server, I can see in Perfmon that the send bytes/sec counter is about 4x higher than from the production server. My estimate is that one server outputs at 1 MB/s and the other at 250 KB/s.

    My immediate suspicion is that there is a router in one of the DCs which shapes traffic or applies QoS limits to traffic from London to the US. However, the support, Windows and networking teams all say that there are no differences in either the networking config of the two DCs or the NIC config of the two app servers...

    How do I find out why the networking bottleneck is four times tighter in one place than in the other? What can I do about it?
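
    A quick way to take the application and SQL Server out of the picture is to measure raw TCP throughput along the same path from each app server; iperf is a common choice and runs on Windows as well. The hostname and port below are illustrative; run the server end on (or next to) the SQL box.

        # on a host near the SQL server
        iperf -s -p 5001

        # from each London app server: 30-second test with 5-second interval reports
        iperf -c sqlside-host.example.com -p 5001 -t 30 -i 5

    If the two app servers show the same 4x gap here, the problem is in the network path (shaping, duplex mismatch, window size); if they don't, look at the app servers themselves.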

    Read the article

  • Using Windows 8 as a web server

    - by Jason
    I have a few hobby websites that I currently host on CentOS 6: Apache, mail serving, PHP, MySQL - nothing special. In the past I used Windows XP to do this same task, for years, and I was OK. I switched to Linux, and for the last few years it has been such a pain: updates break things, and certain apps only support certain distros unless you compile from source. It keeps me from working on my hobby sites because I am always fixing something. With Windows I locked it down, ran a hardware firewall and packet analyser, kept up on updates and A/V, and never had a problem. I don't allow RDC from outside the local LAN, have no FTP open, and run OpenSSH on an obscure port.

    I am considering switching to Windows 8 (since it is now a cheaper license than Windows 7) and running Apache, hMailServer, PHP and MySQL, just like my CentOS install. My questions:

    1. I am not familiar with Windows 8; can the above be done as on XP? Are there any new security restrictions, or anything in the OS preventing this?
    2. The machine is a 64-bit Athlon X2 with 32 GB of RAM. Will Windows 8 see all of the RAM?

    Technically the machine came with Windows 7, and there is a serial number on it, but I am sure I wiped away the Windows 7 recovery partition when I switched to Linux...

    Read the article

  • How to secure an Internet-facing Elastic Search implementation in a shared hosting environment?

    - by casperOne
    (Originally asked on StackOverflow, and recommended that I move it here) I've been going over the documentation for Elastic Search and I'm a big fan and I'd like to use it to handle the search for my ASP.NET MVC app. That introduces a few interesting twists, however. If the ASP.NET MVC application was on a dedicated machine, it would be simple to spool up an instance of Elastic Search and use the TCP Transport to connect locally. However, I'm not on a dedicated machine for the ASP.NET MVC application, nor does it look like I'll move to one anytime soon. That leaves hosting Elastic Search on another machine (in the *NIX world) and I would probably go with shared hosting there. One of the biggest things lacking from Elastic Search, however, is the fact that it doesn't support HTTPS and basic authentication out of the box. If it did, then this question wouldn't exist; I'd simply host it somewhere and make sure to have an incredibly secure password and HTTPS enabled (possibly with a self-signed certificate). But that's not the case. That given, what is a good way to expose Elastic Search over the Internet in a secure way? Note, I'm looking for something that hopefully, will not require writing code to provide shims for the methods that I want (in other words, writing forwarders).
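
    One commonly suggested pattern for this gap, sketched here with nginx as the front end (hedged: it assumes you can install and run a reverse proxy on the host, which very restrictive shared hosting may not allow; the hostname, certificate paths and htpasswd path are illustrative). Elastic Search listens only on 127.0.0.1:9200, and nginx terminates HTTPS and enforces basic authentication in front of it, so no shim code is needed in the application:

        server {
            listen 443 ssl;
            server_name es.example.com;

            ssl_certificate     /etc/nginx/ssl/es.crt;
            ssl_certificate_key /etc/nginx/ssl/es.key;

            auth_basic           "Elastic Search";
            auth_basic_user_file /etc/nginx/es.htpasswd;

            location / {
                proxy_pass http://127.0.0.1:9200;
            }
        }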

    Read the article

  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a Postgres database containing about 200 MB of data. Currently I have a cron job set up on my home computer which:

    1. SSHes into my server
    2. runs a remote script which makes a backup of the database
    3. scp's that dump over to my local hard drive for storage (each dump is 20 MiB)
    4. does this every six hours (one month of backups is roughly 2 GiB)

    The problem with this setup is that if my local machine goes down for whatever reason, no backups will be made. Also, I can't run the cron job on the server, because the server can't scp to my local machine (firewalls and all that). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 Server Edition.

    I looked into Ubuntu One, but currently it's GUI-only. I also looked into Dropbox, but it's a pain to set up on Linux without GUI support. Amazon S3 looks good, but it's not free (though dirt cheap). Is there any other alternative that I should look into? I'd prefer something where I can just have my script dump the database into a directory and have the backup service watch that folder and sync accordingly. I could maybe also have my local machine sync to the cloud backup so I have even more redundancy, plus easy access to my backups for use in testing.
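
    If S3's cost is acceptable, here is a minimal sketch of the server-side cron job it would replace all of this with (hedged: the bucket name, database name and paths are illustrative, and s3cmd must first be configured with the account's keys via s3cmd --configure):

        #!/bin/bash
        # dump the database in pg_dump's compressed custom format and push it to S3
        STAMP=$(date +%Y%m%d-%H%M)
        pg_dump -Fc mydb > /var/backups/mydb-$STAMP.dump
        s3cmd put /var/backups/mydb-$STAMP.dump s3://mydb-backups/

        # keep only the last 30 days of local copies
        find /var/backups -name 'mydb-*.dump' -mtime +30 -delete

    Run from the server's own crontab (e.g. 0 */6 * * *), this removes the dependency on the home machine being up.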

    Read the article

  • Postfix + Exchange + Active Directory: how to mix them

    - by itwb
    My client has many sub-offices and one head office. The head office has a domain name: business.com.

    • All users in the sub-offices need to have a head-office email address: [email protected]
    • Anyone not in the head office needs their email forwarded to an external email address.
    • All users in the head office will have their email delivered to Microsoft Exchange.
    • Users are listed in Active Directory under two different OUs: HeadOffice or SubOffice.

    Is this something that can be configured? I've done some googling, but I can't find any examples of businesses set up this way.

    Edit: Postfix will accept all email and will need to determine whether to forward the email to an external account or deliver it to MS Exchange. I've done some reading about MS Exchange and how you can 'mail-enable' contacts for forwarding - but I don't know if each AD account requires an Exchange CAL? The end goal is to forward email for sub-office users to external accounts, or accept email for the head office. Maybe I don't need to worry about Postfix to perform this task...

    http://www.windowsitpro.com/article/exchange-server-2010/exchange-server-licensing-some-of-your-questions-answered
    "What about client access licenses (CALs)? You need one CAL per user who will connect to Exchange. Although it might not be 100 percent precise, I prefer to think of it as one CAL per mailbox; there are exceptions for users outside your organization, automated tools that use mailboxes, and so on. Exchange doesn't enforce this limit, so it's on you to ensure that you have the correct number of CALs for the set of clients you support."
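
    On the Postfix side, a hedged sketch of one common way to wire this up (the Exchange hostname and the example addresses are illustrative, and the flat files could be replaced with ldap: maps querying Active Directory directly): Postfix accepts mail for business.com, rewrites sub-office users to their external addresses with a virtual alias map, and relays everything left over to Exchange via a transport map.

        # /etc/postfix/main.cf (excerpt)
        relay_domains      = business.com
        transport_maps     = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/transport -- anything not aliased away is handed to Exchange
        business.com    smtp:[exchange.headoffice.internal]

        # /etc/postfix/virtual -- sub-office users are rewritten to their external mailboxes
        [email protected]    [email protected]

    After editing, rebuild the maps and reload: postmap /etc/postfix/transport /etc/postfix/virtual && postfix reload.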

    Read the article

  • Can Octopussy use messages other than syslog style?

    - by Lee Lowder
    I am currently exploring different options for a centralized log server. We use both Linux (Ubuntu 10.04 / 12.04, both LTS) and Windows, though for this specific issue only Linux is relevant. I like Octopussy's interface and feature list, but I am hesitant for a few reasons. One of my biggest concerns is that it seems to be syslog-only. The end goal is to have a centralized place where our devs and admins can search through the logs generated by Apache, Tomcat and 70+ web apps spread across a cluster, for both our prod and test environments. While I did see that Octopussy has support for plugins, I haven't been able to find any sort of plugin repo or in-depth guide as to what can be done with them.

    Does anyone know if plugins can be used to allow Octopussy to accept non-syslog messages - specifically log4j-style log messages that may include multi-line stack traces and the like? Also, is there a user community for this software, such as a mailing list or forum? I've been unable to locate any so far. Thank you.
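
    One hedged workaround that sidesteps the plugin question entirely: have the applications emit syslog themselves, so the central server only ever sees syslog. For the log4j-based web apps that could look roughly like the properties sketch below (the host name and facility are illustrative); the usual caveat is that classic syslog is line-oriented, so multi-line stack traces may arrive as separate messages unless the pattern layout folds them.

        # log4j.properties (excerpt) -- ship application logs to the central log host as syslog
        log4j.rootLogger=INFO, SYSLOG
        log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
        log4j.appender.SYSLOG.syslogHost=logs.example.com
        log4j.appender.SYSLOG.facility=LOCAL0
        log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
        log4j.appender.SYSLOG.layout.ConversionPattern=%p %c - %m%n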

    Read the article

  • How to speed up request/response to Django using Apache or another solution?

    - by jbcurtin
    Hey all, I'm mainly a developer, but every now and again I jump into the sysadmin role. For the most part I've gotten away with deploying PHP and Python apps using Apache. I write today because I'm starting to research faster alternatives to Apache, while keeping some of the core features I require, like PUT and DELETE methods and the ability to connect to a socket (this I have not tried, but it might be a nice bonus if I ever employ Comet on my apps). As you've probably guessed, I use JavaScript exclusively for all my websites, utilizing deep linking for SEO support.

    The main area where I'm looking to increase performance is the path from the Django apps and the web server through to the client response. Every day I do my best to keep the smallest memory footprint possible; however, I am getting to the end of my rope when it comes to working with Apache. In general, keep in mind that I'm just starting this research, so I'm looking more for material to read than for solutions at this moment.

    My main questions:

    1. Am I missing something about Apache that makes it faster than everything else?
    2. What would be a good server environment to deploy just static files on?
    3. What are some of the leading open-source and paid alternatives?
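
    For reading material, one hedged sketch of the stack that usually comes up in this comparison: nginx in front (small memory footprint, serves static files directly, passes PUT/DELETE through to the app) with the Django apps running under gunicorn behind it. The module path and port are illustrative; projects created before Django added a default wsgi.py would use the gunicorn_django launcher instead.

        pip install gunicorn

        # run the Django project with a few workers, listening only on localhost
        # so the front-end web server proxies to it
        gunicorn myproject.wsgi:application --workers 3 --bind 127.0.0.1:8000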

    Read the article
