Search Results

Search found 18096 results on 724 pages for 'let'.


  • Drupal 7: One-time user account

    - by Noob
    I'm going to create a survey in Drupal 7 with the Webform module, installed on a Debian system which may be adapted in every way. The users (personally known, approx. 120) doing that survey will walk into a room and complete the survey in browsers on different computers. After that, they'll leave the room and other persons will enter, complete the survey on the same computers, and so on. Each user may enter only one submission. The process needs to be anonymous, i.e. I mustn't have any idea of who did which submission. My current solution is to generate random one-time passwords and hand out one password per user (without noting who got which password). Within the survey there will be a password field where the one-time password is entered. The value is checked by Webform to be unique. I'll get the data via CSV or Excel and verify the passwords manually in Excel by comparing them to the list of valid passwords. The problem is: I don't like the idea of manually generating the password list, copying it to Excel and doing a manual check. That's fine for one-time use, but we're going to repeat the survey every once in a while. I'd rather generate one-time logins (like user0001/fdlkjewf, user0002/dfrefnnr, ...) for each survey, hand them out to the users and let Drupal/Debian/whatever check whether a submission is valid or not. Do you have any idea how to batch-generate about 120 users with one-time passwords in Drupal 7 and verify that each user may submit the form only once? Or do you have a better idea of how to accomplish this task within the intranet? Thank you for your help.
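    A minimal sketch of the kind of batch generation being asked about, assuming Drush is available and run from the site root; the user0001-style names, the password length and the CSV file name are illustrative, and Webform's per-user submission limit would still enforce the "one submission each" rule:

        import csv
        import secrets
        import subprocess

        rows = []
        for i in range(1, 121):
            name = "user%04d" % i
            password = secrets.token_urlsafe(8)  # throwaway one-time password
            # create the Drupal account via Drush (assumes a working Drupal 7 site root)
            subprocess.run(["drush", "user-create", name, "--password=%s" % password], check=True)
            rows.append((name, password))

        # print this list and hand the lines out without recording who gets which one
        with open("one_time_logins.csv", "w", newline="") as f:
            csv.writer(f).writerows(rows)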

    Read the article

  • postfix not sending domain mail to mx

    - by orlandoresorts
    I'm trying to get Postfix to forward email for my domain, which is hosted by Gmail, as I don't have any mail users on my server, nor do I want any. Here's how I have things set up. Let's say you and I have a domain called mcdonalds.com; the registrar has mcdonalds.com MX records pointing to Gmail (everything works for like a year). Now we set up a server to host a website. Then we create a mail account called [email protected] and send mail locally from the server using Roundcube. This works. We can send mail to cnn.com, we can send mail to serverfault.com, we can email anyone and everyone. BUT we cannot send mail to our own domain, mcdonalds.com. So I cannot email [email protected], I cannot email [email protected], I cannot email [email protected]. It gives the error: SMTP Error (450): Failed to add recipient "[email protected]" (4.1.1 : Recipient address rejected: User unknown in virtual mailbox table). I'm guessing that's because it is looking for the mailbox on the local server, where it doesn't exist. So how do I tell the server that any mail going to mcdonalds.com (e.g. [email protected]) should be sent to my external mail server and NOT looked up on the local www box we set up with zPanel? Any ideas?
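    For what it's worth, a sketch of the usual Postfix-side fix, assuming the domain ended up in Postfix's list of locally hosted domains (the exact parameter zPanel populates may differ):

        # see which local-domain lists Postfix thinks mcdonalds.com belongs to
        postconf mydestination virtual_mailbox_domains virtual_alias_domains

        # the usual fix: take mcdonalds.com out of those lists in /etc/postfix/main.cf
        # (or out of the zPanel settings that feed them), so Postfix falls back to the
        # public MX records (Gmail), then:
        postfix reload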

    Read the article

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and installed my iMac. I am running 10.6.5. Prior to the format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. My default root directory for the Apache web server is pointed to an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the user dir include conf file:

        Include /private/etc/apache2/extra/httpd-userdir.conf

    And uncommented the virtual hosts conf file:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they all do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that suggested as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
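    For reference, a sketch of a vhosts file that usually behaves on a stock 10.6 Apache; the first <VirtualHost> listed acts as the catch-all default, and apachectl -S shows which vhost each name actually maps to (the localhost entry below is illustrative):

        # /private/etc/apache2/extra/httpd-vhosts.conf
        NameVirtualHost *:80

        # first vhost = default for any request that matches no ServerName
        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites"
            ServerName localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    Then check the parsed vhost table and restart:

        sudo apachectl -S
        sudo apachectl restart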

    Read the article

  • Use WSUS without client configuration

    - by sc911
    Hello everyone, is there any way to let client PCs use the local WSUS server without having to configure them? What we need is a system to update PCs before they are delivered to the users, so the WSUS server is accessible only within our lab, not later at the users' site. We'd like to use WSUS because it will speed up the downloads considerably. And we'd rather not modify the clients, as those changes might be forgotten and never removed, and then no updates would be possible once the machines are at the users' site. So the easiest way would be if one could redirect normal Microsoft Update traffic, but I'm pretty sure that won't be possible, as that update will not be WSUS compliant. Another option I thought of is that DHCP could deliver an extra option telling the clients where to get their updates, but I could not find any information about this, so it looks like that isn't possible either. So, is there any way? Or will it be easier to use a little script to change the WSUS entries automatically? Regards, sc911
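    If it does come down to the "little script" option, the WSUS client settings boil down to a handful of registry values; a sketch of a .reg file for the lab, with a hypothetical WSUS host name and the default port 8530 (deleting the WindowsUpdate policy key afterwards reverts the client to plain Microsoft Update):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "WUServer"="http://wsus.lab.example:8530"
        "WUStatusServer"="http://wsus.lab.example:8530"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        "UseWUServer"=dword:00000001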

    Read the article

  • Is this hard disk dead?

    - by Korjavin Ivan
    I'm not sure whether this is the right site for this question, but let me try. Lately I've been having problems with a hard disk. Sometimes it makes a strange sound, and I get this from the logs:

        $ dmesg | grep ata4
        [29409.945516] ata4.00: exception Emask 0x10 SAct 0xf SErr 0x90202 action 0xe frozen
        [29409.945529] ata4.00: irq_stat 0x00400000, PHY RDY changed
        [29409.945538] ata4: SError: { RecovComm Persist PHYRdyChg 10B8B }
        [29409.945546] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945562] ata4.00: cmd 60/30:00:56:22:5f/00:00:00:00:00/40 tag 0 ncq 24576 in
        [29409.945573] ata4.00: status: { DRDY }
        [29409.945580] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945594] ata4.00: cmd 60/18:08:8e:22:5f/00:00:00:00:00/40 tag 1 ncq 12288 in
        [29409.945605] ata4.00: status: { DRDY }
        [29409.945611] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945625] ata4.00: cmd 60/08:10:46:02:66/00:00:00:00:00/40 tag 2 ncq 4096 in
        [29409.945635] ata4.00: status: { DRDY }
        [29409.945641] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945656] ata4.00: cmd 60/80:18:ee:04:66/00:00:00:00:00/40 tag 3 ncq 65536 in
        [29409.945666] ata4.00: status: { DRDY }
        [29409.945679] ata4: hard resetting link
        [29413.976083] ata4: softreset failed (device not ready)
        [29413.976097] ata4: applying SB600 PMP SRST workaround and retrying
        [29414.148070] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [29414.184986] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd
        [29414.243280] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd
        [29414.243292] ata4.00: configured for UDMA/133
        [29414.243324] ata4: EH complete
        [680674.804563] ata4: exception Emask 0x50 SAct 0x0 SErr 0x90a02 action 0xe frozen
        [680674.804575] ata4: irq_stat 0x00400000, PHY RDY changed
        [680674.804584] ata4: SError: { RecovComm Persist HostInt PHYRdyChg 10B8B }
        [680674.804603] ata4: hard resetting link
        [680678.840561] ata4: softreset failed (device not ready)

    Is this ata4 SATA hard drive dead? Must I change it ASAP? Do I need to provide more info?
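    As for more info, the SMART data is usually the quickest extra evidence either way; a sketch, assuming smartmontools is installed and that ata4 maps to /dev/sdd (check which /dev/sdX it really is first):

        smartctl -a /dev/sdd        # health status, Reallocated/Pending sector counts, CRC error count
        smartctl -t short /dev/sdd  # run a short self-test, then re-read the report a few minutes later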

    Read the article

  • How to set up a PRIVATE vimwiki on Dropbox.com

    - by Zongheng Yang
    Hi everyone, I assume those who are reading this page know what vimwiki and dropbox.com are and what they are for, so I'll go straight to my confusion. The common way of setting up a PRIVATE vimwiki on Dropbox is simply to put your vimwiki directories under the Dropbox folder (but not Dropbox/Public/, because that would be PUBLIC). Dropbox allows viewing HTML directly via a dropbox.com/* URL: for example, an index.html can be accessed with the URL https://dl-web.dropbox.com/get/Wiki/html/index.html?w=bfead71a, i.e. with a specified string, ?w=bfead71a, appended after the file name. Hence, if index.html contains a reference to A.html, which is located in the same folder as index.html, that file has to be accessed using some URL like https://dl-web.dropbox.com/get/Wiki/html/A.html?w=SPECIFIED_STRING. But it seems impossible to hack vimwiki so that the hrefs in the converted HTML files are corrected in this way. Is there some approach that can resolve this problem? I hope I've made myself clear. If you have any questions, please ask me for further explanation. Thank you!
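    One possible workaround is to post-process the generated HTML instead of hacking vimwiki itself; a rough sketch, assuming the same ?w= token is valid for every file in the folder (if Dropbox issues a different token per file, this won't help) and a made-up local path:

        import glob
        import re

        TOKEN = "?w=bfead71a"  # whatever token Dropbox shows for this folder

        for path in glob.glob("/home/me/Dropbox/Wiki/html/*.html"):
            with open(path) as f:
                html = f.read()
            # append the token to local .html links (absolute http(s) links are left alone)
            html = re.sub(r'href="([^":]+\.html)"', r'href="\1%s"' % TOKEN, html)
            with open(path, "w") as f:
                f.write(html)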

    Read the article

  • Installed Paragon HFS+ for Windows 8, now my pc won't recognize the external firewire drive

    - by Steve
    I'm not incredibly knowledgeable about computers and I really need some help. Just got a Seagate external firewire drive this morning. I downloaded the necessary pc driver (Paragon HFS+ for Windows 8) through their website per the instructions that came with the drive. After installation, I restarted and the pc recognized the firewire drive just fine. About three hours into copying files from my pc to the firewire drive, it gave me an error and told me the files couldn't be copied. When I clicked to get out of the message, the computer crashed. After an hour of it trying to repair itself in safe mode, it restored me to an earlier version before the system crashed. Here's my current dilemma: The Paragon HFS+ is still showing up in my programs as installed, but the Device Manager is not recognizing the drive. When I try to uninstall and reinstall Paragon, it interrupts me with a message saying "The setup must update files or services that cannot be updated while the system is running" and basically gives me the finger. I have no idea what to do now, as it won't let me uninstall and reinstall Paragon, and I have no idea why it crashed my computer in the first place. Is there possibly another Mac - PC firewire driver I can try downloading instead? I really don't know what I'm doing and any help would be greatly appreciated.

    Read the article

  • 'Bug in Mailman version 2.1.12'

    - by davorg
    I'm working on setting up a server running Plesk 10.4.4 Update #13 on CentOS 6.2. I've configured Mailman and now I want to set up some mailing lists. I've created a list in the Plesk control panel, but when I try to administer the new list (by visiting http://lists.[domain].com/mailman/admin/[listname]) I see the following error: "Bug in Mailman version 2.1.12 We're sorry, we hit a bug! Please inform the webmaster for this site of this problem. Printing of traceback and other system information has been explicitly inhibited, but the webmaster can find this information in the Mailman error logs." I see exactly the same error if I try to go to the list info page at http://lists.[domain].com/mailman/listinfo/[listname]. I would follow the instructions and look in the error logs, but I can't find them. I would expect to find a file at /var/log/mailman/error, but there's nothing there. My test list seems to work correctly. It sends all the expected email. It's just the web pages for the list that seem to be broken. Has anyone else seen this? Any suggestions for tracking down and fixing the problem? p.s. I think I've chosen the correct Stack Exchange site, but if this question would be better asked elsewhere, please let me know. Update: I got to the bottom of this, so I'm documenting the answer in case anyone else has the same problem. The fact that I couldn't find the error log was the clue. The problem was that the Mailman process didn't have permission to create an error log, and it seems that if Mailman can't create an error log it will respond to any web request with this error page. Creating an error log file (in /var/log/mailman/error) and giving it the correct permissions fixed the problem.
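    For anyone hitting the same thing, the fix described in the update amounts to something like this (the mailman user/group name and log path may differ on other Plesk/CentOS builds):

        mkdir -p /var/log/mailman
        touch /var/log/mailman/error
        chown mailman:mailman /var/log/mailman /var/log/mailman/error
        chmod 660 /var/log/mailman/error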

    Read the article

  • Automating first time login process in Windows Server 2008 R2 SP1 virtual machine

    - by George Durzi
    I have a set of Windows 2008 Server R2 SP1 Enterprise Edition virtual machines running in Hyper-V. The host server has 64GB of RAM and two SSD drives (one drive for the host OS, and the second one for the VMs). The virtual machines are as follows: Domain Controller: 4GB RAM Exchange Server: 4GB RAM Terminal Services: 50GB RAM We use this setup for a travelling training class where users remote desktop to one of the VMs - let's call it the Terminal Services or "TS" VM - where tools such as Visual Studio are installed. The students go through some labs on the TS VMs in Visual Studio. Overall, this setup works great. However, when users are collectively logging in for the first time, the VM really struggles to keep up while all the user profiles are created. It can take some users up to 10 minutes to login. The number varies from 30 to 40 students. A workaround to this would be to manually remote desktop to the TS virtual machine using all the accounts to ensure that the local profile is created in advance. I'm looking for a way to automate the first time login process on the TS virtual machine. I am envisioning iterating through the accounts in a certain Active Directory OU, and then somehow initiating a remote desktop session to the TS VM to log them in for the first time. Are there ways to do this? Thanks
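    A rough sketch of the "iterate and log in once" idea, assuming the OU's accounts (with their known lab passwords) have been exported to a CSV, and that a stored credential for TERMSRV/<host> lets mstsc connect without prompting; the domain, host name, file name and delay are all made up for illustration, and the sessions would still need to be logged off on the VM afterwards:

        import csv
        import subprocess
        import time

        TS_HOST = "ts01.lab.local"  # hypothetical Terminal Services VM

        with open("students.csv") as f:  # columns: username,password
            for username, password in csv.reader(f):
                # stash the credential so mstsc won't prompt, open a session, then clean up
                subprocess.run(["cmdkey", "/generic:TERMSRV/" + TS_HOST,
                                "/user:LAB\\" + username, "/pass:" + password], check=True)
                subprocess.Popen(["mstsc", "/v:" + TS_HOST])
                time.sleep(120)  # give the profile time to build before starting the next one
                subprocess.run(["cmdkey", "/delete:TERMSRV/" + TS_HOST], check=True)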

    Read the article

  • Why can we change our IP address?

    - by iamstupid
    I came across some websites that offer to change your IP address. They say you can surf the net anonymously, including changing your IP address and location. Most of this software is not free, so I have not tried it out yet. But my question is: will IP addresses then no longer be unique or valid for identifying which computer sent or requested the information? I thought only the ISP could determine our IP, so can we really change our IP with some commercial software? Case: if I change my IP address and go to a website which is supposed to be banned by my country, will the ISP let me pass the check and be able to browse the website which should be blocked? Another question: from what I know, if we want to go to a certain website, the flow is: My Computer = ISP = Website = ISP = My Computer. I am not sure if that is the correct flow, but I am sure that whichever website I want to visit, I need to go through my ISP, isn't it? So if we change our IP, will our ISP record our new IP or the original (ISP-assigned) IP? Sorry for my bad English.

    Read the article

  • Virtualizing an Inline network appliance with VirtualBox (or VMWare)

    - by Tzury Bar Yochay
    My device, which is a Linux-based IP in-liner, is transparent to the network peripherals; that is, no IP address is assigned to any of its interfaces. For the sake of the conversation, let's use an ADSL connection as an example: while the device is inspecting the bi-directional traffic, the network behaves the same as if the device were not there, attached to the wire (see the physical setup in the attached diagram). I wonder if I can enclose that "device" within a Windows machine and have it operate virtually, so that it still sits inline between the ADSL router and the Windows networking interface, by using virtual NICs (or whatever their name is in Windows) and inspecting the traffic, same as if it were on a separate physical device; the drawing under "Virtual Setup" in the attached diagram shows what I am trying to achieve. Reading a bit of the VirtualBox docs, it seems like binding the right side is relatively simple: perhaps I should have one network adapter set to Bridged Networking, and VirtualBox will connect it to the physical NIC on the host machine, so that network packets are exchanged directly, circumventing the host operating system's network stack (WinXP in my case). However, I have no idea how to achieve the left side of my diagram, which requires adding virtual NICs to Windows and configuring them correctly in a way that makes that pipeline possible. I would appreciate any help. By the way, if that is not possible with VirtualBox but is possible with another virtualization solution (e.g. VMware), I would accept that as well.
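    A sketch of the VirtualBox side of that wiring, for a hypothetical VM named "inliner": NIC 1 is bridged (promiscuous) to the physical adapter facing the ADSL router, NIC 2 sits on a host-only network, and the host's own traffic would then have to be re-pointed at the host-only adapter; that last re-plumbing step on WinXP is the part VirtualBox does not do for you:

        VBoxManage modifyvm "inliner" --nic1 bridged --bridgeadapter1 "Local Area Connection" --nicpromisc1 allow-all
        VBoxManage modifyvm "inliner" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter" --nicpromisc2 allow-all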

    Read the article

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / wordpress and Amazon S3. Now, I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this:

        /* If the request is for pictures, javascript, css, etc */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* Remove the cookie and make the request static */
            unset req.http.cookie;
            return (lookup);
        }

    I do not understand why this is done. Most of the examples also run NginX as the webserver. Now the question is: why would you use the Varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files, so that php-fpm / MySQL don't get hit that much. Am I correct, or am I missing something here? UPDATE I want to add some info to the question based on the answer given. If you have a dynamic website where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, it can be cached for long periods of time. That said, what matters more to me is static content. I have found a link with some tests and benchmarks of different cache apps and webserver apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ NginX is actually faster at serving static content, so it makes more sense to just let it pass. NginX works great with static files. -- Apart from that, most of the time static content is not even on the webserver itself. Most of the time it is stored on a CDN somewhere, maybe AWS S3, something like that. I think the Varnish cache is the last place where you want your static content stored.
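    For comparison, the "just let NginX handle it" variant suggested in the update is a one-line change in vcl_recv (a sketch, using the same URL pattern as above):

        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            unset req.http.cookie;
            return (pass);  /* don't cache: hand static files straight to the nginx backend */
        }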

    Read the article

  • Give access to specific services on Windows 7 Professional machines?

    - by Chad Cook
    We have some machines running Windows 7 Professional at our office. The typical user needs to have access to stop and start a service for a local program they run. These machines have a local web server and database installed and we need to restrict access to certain folders and services related to the web server and database for these users. The setup I have tried so far is to add the typical user as a Power User. I have been able to successfully restrict them from accessing certain folders (as far as I can tell) but now they do not have access to the service needed for starting and stopping the local program. My thought was to give them access to the specific service but I have not had any luck yet. In searching the web for solutions the only results I have found relate to Windows Server 2000 and 2003 and involve creating security templates and databases through the Microsoft Management Console. I am hesitant to try an approach like this as these articles are typically older and I worry this method is outdated. Is there a better way to accomplish the end goal of giving the user permission to run the service and restrict their access to certain folders? If any clarification is needed on the setup or what we are trying to achieve, please let me know. Thanks in advance.
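    One route that avoids the old security-template machinery is editing the service's own ACL with sc.exe; a sketch where the service name and the user's SID are placeholders, and the rest of the D: string must be exactly what sc sdshow printed on that machine (in service SDDL, RP = start, WP = stop, LC = query status):

        sc sdshow MyLocalService
        sc sdset MyLocalService "D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)(A;;RPWPLC;;;S-1-5-21-1111111111-222222222-333333333-1001)"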

    Read the article

  • Use subpath internal proxy for subdomains, but redirect external clients if they ask for that subpath?

    - by HostileFork
    I have a VirtualHost that I'd like to have several subdomains on. (For the sake of clarity, let's say my domain is example.com and I'm just trying to get started by making foo.example.com work, and build from there.) The simplest way I found for a subdomain to work non-invasively with the framework I have was to proxy to a sub-path via mod_rewrite. Thus paths would appear in the client's URL bar as http://foo.example.com/(whatever) while they'd actually be served http://foo.example.com/foo/(whatever) under the hood. I've managed to do that inside my VirtualHost config file like this:

        ServerAlias *.example.com
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]   # <---
        RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC]         # AND is implicit with above
        RewriteRule ^/(.*)$ /foo/$1 [PT]

    (Note: It was surprisingly hard to find that particular working combination. Specifically, the [PT] seemed to be necessary on the RewriteRule. I could not get it to work with examples I saw elsewhere like [L] or trying just [P]. It would either not show anything or get in loops. Also some browsers seemed to cache the response pages for the bad loops once they got one... a page reload after fixing it wouldn't show it was working! Feedback welcome—in any case—if this part can be done better.) Now I'd like to make what http://foo.example.com/foo/(whatever) provides depend on who asked. If the request came from outside, I'd like the client to be permanently redirected by Apache so they get the URL http://foo.example.com/(whatever) in their browser. If it came internally from the mod_rewrite, I want the request to be handled by the web framework...which is unaware of subdomains. Is something like that possible?
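    The usual trick for the second half is to key the redirect off %{THE_REQUEST}, which holds the client's original request line and is not changed by internal rewrites; a sketch that would sit alongside the rules above (the 301 status and host name are placeholders):

        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{THE_REQUEST} \s/foo/ [NC]
        RewriteRule ^/foo/(.*)$ http://foo.example.com/$1 [R=301,L]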

    Read the article

  • Define custom escape sequences in terminal

    - by Ipkiss
    I would like to change the escape sequences used by some keys in my terminal. My goal is to define custom mappings in Vim (terminal version). In the following I use shift-space as an example, but I would prefer if the proposed solution could be generic. My current terminal (gnome-terminal) uses a simple space as escape sequence for shift-space, as can be seen by typing ctrl-v shift-space. A quick check with the true xterm shows the same behavior. I would like that the shift-space key combo generates another escape sequence (e.g., the one of shift-F30, which I would never use otherwise). So, how would I go about doing that? And is it really a good idea? Let me know if there are better alternatives... Note: I'm aware that this is only part of the problem: after the terminal sends a proper escape sequence for my keys, I still need to teach Vim what it means. But I think I know how to deal with that.
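    gnome-terminal has no per-key setting for this, but xterm does, via the translations resource; a sketch for ~/.Xresources that makes shift-space send a made-up CSI sequence (the shift-F12-style sequence is only an example, and Vim would then need a matching :set/:map for that raw sequence):

        ! xterm only: make shift-space send ESC [ 2 4 ; 2 ~ instead of a plain space
        XTerm*vt100.translations: #override \
            Shift <Key>space: string(0x1b) string("[24;2~")

        ! reload with: xrdb -merge ~/.Xresources, then start a fresh xterm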

    Read the article

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    Thinking about all the unique and different peripherals I've owned over the years, from ISA capture cards to parallel-port-controlled shutter glasses for 3D games, I've seen many, many accessories and computer peripherals come and go. The nostalgia of these things is a lot of fun. I tried to find some sort of historical timeline or list, but what mostly turned up was computers themselves. I'm more interested in the mice, the scanners, the weird adapters that shouldn't exist, short-run very rare products, strange devices from computer shows in the 80s and 90s... hardware you might find in a geek's basement that would be completely useless now, but was the coolest thing around when it was new. An example would be a drawing tablet I had for my TI-99 computer, the audio tape player accessory for a C64 which let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember that IBM PCjr upgrade kit that added a floppy drive, more memory and the AT switch in the back? I'd love to find either a wiki or a list that has already been assembled which contains many of these weird (or common) accessories. I've had so many over the years that I suppose I could start a wiki here if such a list doesn't already exist.

    Read the article

  • How to kill tasks in Windows 7 when even Task Manager won't open or respond?

    - by endolith
    Occasionally one of my computers will get so bogged down that everything locks up: Ctrl+Alt+Del doesn't work, Task Manager won't open, or they work but are opening so slowly that it will take hours or days to shut down other processes and regain control of the computer. Is there a way to, for instance, force Task Manager to run at the highest priority so it always opens immediately with Ctrl+Shift+Esc, even when some other process or driver is hogging the CPU? Is there some other program that can run in the background and open immediately like this? This question isn't about fixing "underlying problems". No matter how much memory you have, it's still possible for a rogue process to eat it all up and lock up the computer in page-fault thrashing, hog the CPU, etc. This question is about how to take back control of the computer when that happens. Basically, when these kinds of lock-ups happen, I want to open some kind of task manager that pauses every other process and allows me to kill one of them, and then lets everything resume so I can save my work, etc. Otherwise my only option is to hold down the power button. Antifreeze is supposed to do exactly what I want, pausing all other applications and starting a task manager to kill the offender, but in my testing it actually does neither.

    Read the article

  • How could one track all emails sent from employees?

    - by Schnapple
    My client runs a small business. This business has a small number of employees. For various reasons, my client would like to be able to have a copy of all of the emails sent from their employees BCC'd to them. The net effect here would be similar to the access they would have if they hosted their email through Exchange but the business is too small to make this a feasible option. They are currently hosted through GoDaddy. I have not investigated it myself personally but apparently GoDaddy can do something along these lines for all incoming email but not for outgoing email. Is there a way to set up email accounts for a particular domain to where a specified admin user could be copied on all outgoing email? UPDATE: I've modified the title to reflect that it's employees not just users who are the goal here. Also I forgot to mention how they currently do email through GoDaddy - POP3. I think maybe IMAP is also possible through GoDaddy, not sure. And yes, the bottom line here is to basically emulate a feature of a larger-class platform through a smaller, cheaper platform. Opinion-only answers should probably be relegated to the comments. For the sake of argument let's say that any legal requirements have been met.

    Read the article

  • httpd.config Easy Apache WHM CentOS

    - by jessie
    First let me explain how I got into this situation. I run a streaming video site. Videos are about 100-250 MB in size, and at any given time there are 500 people on the site. So I guess that would make them static. Recently my site started getting really slow, and the only way to fix it temporarily was to restart Apache. There was no change in traffic that could have caused this, and my site is not being attacked. My hosting company recommended implementing mpm_mod and suPHP. They did that using EasyApache in WHM. Then everything was working fine, but a little slow. I researched around, and to my understanding mpm will do that but be more stable. I was told that installing FastCGI would speed things up just enough. Well, that made everything worse. The site is slow and times out. I used WHM and took FastCGI off, but it's still the same; it seems like nothing I do changes anything. I even did a rollback of the httpd.conf file, but that didn't work. I'm not sure how to fix this, and my hosting company's network guy won't be able to touch the problem until Tuesday. I have root access.

    Read the article

  • Router startup problem

    - by gfmoz
    I have problems with my Tilgin Vood router. When I try to start the router by turning the power on (captain obvious), it generally doesn't work the first 3-4 times, which is getting very annoying. Five minutes after turning the power on, the router's signal LEDs don't blink the way they should in a connected state. I can connect to the router's web configuration interface from my PC, which is connected to it via LAN, but I can't access the internet. It usually takes the router five minutes to get to the point where it should be connected to the internet, but it doesn't get there the first few times. So I turn the router off and on 3-5 times, let it work for 5 minutes each time, and then suddenly, after turning the power off and on once more, it all works. The problem is only at startup; once I get it going, everything runs as smooth as a 1980s text-based C++ game on a 3 GHz machine. I also have to restart my PC for everything to work.
    - How can I solve this problem?
    - Should I just leave the router turned on all the time? I'd prefer a daily IP switch, though.
    - Could the problem have something to do with my PC? There is another PC connected to the router too, and it doesn't work there either.

    Read the article

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps After contacting a provider in one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small; the resource requirements might be too great even for a VPS, so I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx). But this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad-processor option. However, this starts at $599 monthly. Now, I won't be able to transfer all of our 30 sites to this machine; I'd only be able to transfer say 5 or 6. However, moving forward, I'd be able to host all future sites on this machine. This amounts to 4-5 per year. Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:
    First month costs: $599
    First month revenue: 6 x $16.95 = $101.70
    Loss in first month: $497.30
    First year costs: $599 x 12 = $7,188
    First year revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
    Loss in first year: $5,459.10
    Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the number of sites we build?
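    A rough way to sanity-check that arithmetic, assuming 6 sites move over now, roughly 5 more are added at the start of each following year, and every site keeps paying the $16.95/month shared-hosting rate against the $599/month box:

        monthly_cost = 599.00  # quad-processor dedicated server
        monthly_rate = 16.95   # what each site pays for shared hosting today
        sites = 6              # sites moved over on day one
        balance = 0.0

        for month in range(1, 121):                # look ten years out
            if month > 1 and month % 12 == 1:      # roughly 5 new sites per year
                sites += 5
            balance += sites * monthly_rate - monthly_cost
            if balance >= 0:
                print("breaks even around month %d with %d sites" % (month, sites))
                break
        else:
            print("still $%.0f in the hole after 10 years (%d sites)" % (-balance, sites))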

    Read the article

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (pdf files). From these pdf files' pages a preview image is made on the fly and presented to the web app's users. Some pdfs might be on the large side, most will be under 50 MB but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image for large pdf files is acceptable but no more than a minute let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power. My idea to solve the problem: To deal with this situation I had the idea of having one or more pdf processing servers as needed, and one or more file storage servers. These two types of servers are mounted to the server on which the actual app runs using NFS. The app could then use GearMan to delegate pdf processing tasks to these processing servers. The processing server can mount the storage server and read the file stored there, process it and write its output to that server. The servers I'm talking about will be amazon ec2 instances. The web app returns a link to the resulting pdf preview image on the storage server that was used which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers, is this idea viable or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
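    For the NFS piece specifically, the wiring is small; a sketch with made-up host names, paths and subnet:

        # /etc/exports on the storage server
        /srv/uploads  10.0.0.0/24(rw,sync,no_subtree_check)

        # /etc/fstab on the web-app and pdf-processing servers
        storage01:/srv/uploads  /mnt/uploads  nfs  rw,hard,noatime  0  0

        # apply without a reboot
        exportfs -ra          # on the storage server
        mount /mnt/uploads    # on each client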

    Read the article

  • Comparing 2 (or 3 Files If Possible) "Line By Line"

    - by PythEch
    I want to find the differences between 2 (or 3, if possible) files line by line. Diff utils can do this; however, they give inaccurate results. The 2 files have exactly the same number of lines, 134, yet diff reports "Added Lines" and "Removed Lines". This is wrong: there are no added or removed lines. The text files I want to compare contain only numbers; maybe that's why the algorithm fails. I couldn't find any option to prevent that. I may be wrong; I mean, there should be an option for that, but again, I couldn't find one.
    This is what I get (5am.txt vs 6am.txt, there is a huge problem):
    This is what I want (6am.txt vs 7am.txt, still has problems):
    But the first image still has this problem at the last lines.
    Edit: After I figured out that there is no utility to do this, I handled it myself. I did almost the same thing RedGrittyBrick has done. This script imitates the diff utility, so I (or you) can use it with diff2html. To use it with diff2html, just change the line

        diff_stdout = os.popen("diff %s" % string.join(argv[1:]), "r")

    to

        diff_stdout = os.popen("script.py %s" % string.join(argv[1:]), "r")

    and name this script whatever you want:

        import sys

        f1 = open(sys.argv[1], "r")
        f1_read = f1.readlines()
        f1.close()

        f2 = open(sys.argv[2], "r")
        f2_read = f2.readlines()
        f2.close()

        changed = {}
        first_c = ""
        for n in range(len(f1_read)):
            if f1_read[n] != f2_read[n]:
                if first_c == "":
                    first_c = n + 1
                changed[first_c] = n + 1
            else:
                first_c = ""

        # Let's imitate diff-utils...
        for (x, y) in changed.items():
            print "%d,%dc%d,%d" % (x, y, x, y)
            for i in range(x, y + 1):
                sys.stdout.write("< %s" % f1_read[i - 1])
            print "---"
            for i in range(x, y + 1):
                sys.stdout.write("> %s" % f2_read[i - 1])

    Final results:

    Read the article

  • Connect by Wifi to Sql Server from another computer

    - by Bronzato
    I try to connect by Wifi to Sql Server with Sql Server Management Studio from another computer but it failed. I have a computer with Windows Seven & Sql Server 2008 (lets say the server computer). Next to it, I have a fresh installed computer with Windows Seven & Sql Server Management Studio (let's say the client computer). What I do on the server computer: configure firewall by enabling port 1433 enabled network protocols (TCP/IP) inside Sql Server Configuration Manager checked "Allow remote connections to this server" on server properties in Sql Server Management. started Sql Server Browser restarted services (Sql Server Browser is stopped but I think it is not neccessary, isn't it?) Next, I successfully tested a ping on the port 1433 from my client computer with a tool named tcping (ex: tcping 192.168.1.4 1433). But I still cannot connect from my client computer to Sql Server on my other computer. Ok, something new on this problem: until now, I successfully connected to my "server computer" with Management Studio. What I do is typing the computer name in the server name field in the connection window of Management Studio. My previous (failed) attempt was to type the computer name followed by the instance of sql server (ex: COMPUTER_NAME\SQL2008). I don't know why I only have to type the computer name... Nevermind. Now my new challenge is to succeed connecting my VB6 application to this remote database located on my "computer server". I have a connection string for this but it failed to connect. Here is my connection string: "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP\SQL2008" Any idea what's wrong? Thanks

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size. Yet, I do not know many sysadmins who are willing to let a single volume grow over several terrabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail. The more drives in a single LUN, the longer this takes and the greater your risk of losing another drive while the rebuild is taking place. Then there's usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation? There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?

    Read the article
