Search Results

Search found 32247 results on 1290 pages for 'access modifiers'.


  • Resuming downloads in Firefox

    - by Kim
    Unfortunately, Firefox still fails to offer an option to resume interrupted downloads. I've run into this problem many times, and in earlier searches I found posts saying Firefox was going to fix it; as of 3.6.3 it hasn't. I just tried Free Download Manager (FDM) again, having the Firefox add-on FlashGot hand the download to it. The download gets passed to FDM and fails with the error message "access denied, invalid username or password," even though no password was required. The site I'm trying to get the file from is turbobit.net, which limits download speeds to 100kb/sec and has a 59-second countdown before you get the link, so I guess it's transparently using a password on their end. If I just download normally (save to disk), the download starts fine, but it fails after 30 minutes to 1 hour (always different), my Wi-Fi connection drops briefly, and I have to start all over, so I will never be able to download a large file. I also tried DownThemAll (DTA) instead of FDM with FlashGot, and I get an "access denied" message in DTA. Again, I reloaded, waited the 59 seconds, and downloaded with Firefox, and the download starts fine. The failure message in the Firefox Downloads window is "source file at http... could not be read." Any help would be greatly appreciated. When is Firefox finally going to add the ability to resume downloads? Is there some other software I haven't found through Google that will work?
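
    For what it's worth, command-line downloaders can resume partial transfers; a minimal sketch (the URL is a placeholder):

        # -c / --continue resumes from wherever a partial file left off
        wget -c http://example.com/bigfile.zip

        # curl's equivalent: -C - asks it to work out the resume offset itself
        curl -C - -O http://example.com/bigfile.zip

    Whether this helps depends on the server honoring HTTP Range requests; a site that gates downloads behind a countdown may not.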

  • Encrypted WiFi with no password?

    - by Ian Boyd
    Is there any standard that allows a WiFi connection to be encrypted but not require a password? I know that (old, weak) WEP and the newer WPA/WPA2 require a password (i.e. a shared secret), while my own wireless connections are "open" and therefore unencrypted. There is no technical reason why I can't have an encrypted link that doesn't require the user to enter any password; such technology exists today (see public key encryption and HTTPS). But does such a standard exist for WiFi? I get the sense that no such standard exists (since I'm pretty capable with Google), but I'd like it confirmed. Clarification: I want to protect communications, not limit internet access. That means users are not required to have a password (or its moral equivalent). In particular, users are not required:

        to know a password
        to know a passphrase
        to enter a CAPTCHA
        to draw a secret
        to have a key fob
        to know a PIN
        to use a pre-shared key
        to have a pre-shared file
        to possess a certificate

    In other words: the network has the same accessibility as before, but is now encrypted.

  • Google app mail hosting and windows 2008 server DNS configuration

    - by fyasar
    Hi there, I used to be on shared hosting, where I created a Google Apps account and configured the DNS records in the host's control panel according to Google's DNS documentation. Everything worked until I switched to a dedicated server. First I added the DNS role to my new server and configured all the DNS and NS records myself. I have a mail address at mail.domain.com that used to redirect to Google Apps email. Right now mail.domain.com cannot be reached from most locations, although it still works from a few computers on a different network, and this has been the case for about three weeks. I'm stuck on this problem. I checked the domain on DNSStuff.com and everything comes back green, and InteliWiz also reports the records as correct, so my DNS records appear fine from the outside. Where is my mistake? Any help would be appreciated.
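
    For reference, the records Google's documentation of that era called for look roughly like this (a sketch; domain.com stands in for the real domain, and the values should be checked against Google's current documentation):

        mail.domain.com.   IN  CNAME  ghs.google.com.
        domain.com.        IN  MX  1   aspmx.l.google.com.
        domain.com.        IN  MX  5   alt1.aspmx.l.google.com.
        domain.com.        IN  MX  5   alt2.aspmx.l.google.com.

    If mail.domain.com resolves from some networks but not others, a likely culprit is that some resolvers are still answering from the old shared host's zone, i.e. stale NS delegation or TTLs rather than the records themselves.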

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer, and after nine years of hard work I'm looking to become a bit more "vagrant" starting next year: do some much-needed traveling and work off and on, making use of one of the greatest advantages of a programming job, the ability to work from virtually anywhere. For that, I am looking for a reliable hosting company I can entrust my code to, in the form of a number of Subversion repositories plus an installation of the Redmine project management tool. As my financial situation may vary while traveling, I am looking for something I can pay up front for a year or two that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations, preferably from their own personal experience? I have looked at CVSDude / Codesion, and while they are certainly great, they don't offer Redmine, and they seem to be aimed mainly at bigger organizations. What I would need:

        2-5 GB of space minimum, freely distributable between SVN and Redmine attachments
        An unlimited number of Subversion projects
        Access control (team members / checkout-only accounts / etc.); I don't mind configuring the svn settings on a file basis myself
        The possibility to map a custom domain to the package that is hosted elsewhere
        Frequent backups, and access to those backups through FTP or other means

    I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that come up.

  • Set proper rights for sshfs mountpoint so it can be shared with samba

    - by CS01
    I have a domain hoster that provides access via SSH. My platforms are:

        Gentoo 2.6.36-r5
        Windows (XP/Vista/7)

    I work on Windows and use Gentoo to do all the magic Windows can't do, so I use sshfs to mount the remote public directory for my domain at /mnt/mydomain.com. Authentication is done via keys, so lazy me doesn't have to type in my password every now and then. Since I do my coding on Windows and I don't want to upload/download the changed files all the time, I want to access /mnt/mydomain.com via a samba share. So I shared /mnt in samba, but in Windows Explorer every mount except mydomain.com is listed. My theories are:

        sshfs does not set the mountpoint uid/gid to something that samba expects
        samba does not know that it has to use the uid/gid that /mnt/mydomain.com has been given
        all of the above is wrong, and I don't know

    Here are the configs and console output; if you need anything else, just let me know. I haven't noticed any errors or warnings that seem relevant to this issue, but I might be wrong.

        gentoo ~ # ls -lah /mnt
        total 20K
        drwxr-xr-x  9 root  root  4.0K Mar 26 16:15 .
        drwxr-xr-x 18 root  root  4.0K Mar 26  2011 ..
        -rw-r--r--  1 root  root     0 Feb  1 16:12 .keep
        drwxr-xr-x  1 root  root     0 Mar 18 12:09 buffer
        drwxr-s--x  1 68591 68591 4.0K Feb 16 15:43 mydomain.com
        drwx------  2 root  root  4.0K Feb  1 16:12 cdrom
        drwx------  2 root  root  4.0K Feb  1 16:12 floppy
        drwxr-xr-x  1 root  root     0 Sep  1  2009 services
        drwxr-xr-x  1 root  root     0 Feb 10 15:08 www

    /etc/samba/smb.conf:

        [mnt]
        comment = Mount points
        writable = yes
        writeable = yes
        browseable = yes
        browsable = yes
        path = /mnt

    /etc/fstab:

        sshfs#[email protected]:/home/to/pub/dir/ /mnt/mydomain.com/ fuse comment=sshfs,noauto,users,exec,uid=0,gid=0,allow_other,reconnect,follow_symlinks,transform_symlinks,idmap=none,SSHOPT=HostBasedAuthentication 0 0

    For an easier read, the same fstab entry broken out:

        [email protected]:/home/to/pub/dir/  ->  /mnt/mydomain.com/
        options: comment=sshfs noauto users exec uid=0 gid=0 allow_other reconnect
                 follow_symlinks transform_symlinks idmap=none SSHOPT=HostBasedAuthentication

    Help!
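
    One thing worth trying, as a sketch rather than a confirmed fix (the account name winuser is hypothetical; substitute whatever user smbd serves files as): mount the sshfs with that user's uid/gid instead of uid=0/gid=0, keeping allow_other so processes other than the mounting user can traverse it:

        # mount as the user samba will read files as, not as root
        sshfs -o allow_other,reconnect,idmap=none \
              -o uid=$(id -u winuser),gid=$(id -g winuser) \
              [email protected]:/home/to/pub/dir /mnt/mydomain.com

    FUSE also denies access to users other than the mounter by default unless user_allow_other is enabled in /etc/fuse.conf, which is worth checking whenever smbd runs as a different user than the one that mounted.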

  • Users removing Administrator from files/folders permissions

    - by user64204
    We're running Windows Server 2003 R2 with Active Directory and are having an issue with network shares whereby users, in an attempt to secure their documents, remove everybody (including the Administrator account) from their files/folders permissions. Since Administrator no longer has read permission, we can't even back up the files manually without getting permission errors. One solution we've found is to change the owner of the files and directories to the Administrator account, after which we can change the permissions as we wish. The problem is that this has to be done manually, so it can't really be applied to an entire share. Another solution we've tried is to use cacls as follows:

        cacls d:\path\to\share /C /T /E /G Administrator:F

    The problem with this is that we still get an ACCESS DENIED error on files/folders from which Administrator was removed. Q1: Is there a way to restore at least read access to all files/folders to the Administrator account in a recursive fashion? That would cover the short term. For the long term we're looking for a way to prevent users from removing Administrator from files/folders permissions in the first place. Since we're going to migrate to Windows Server 2008 R2 soon, we could wait until after the migration to implement such a solution if need be. Q2: Is there a way to prevent users from removing Administrator from files/folders permissions on Windows Server 2003/2008?
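
    For Q1, a sketch of the usual recursive recovery, assuming Server 2003 SP2 or later (icacls ships with SP2, and with 2008 out of the box); takeown works even against deny ACLs because taking ownership is an Administrators privilege that an ACL cannot remove:

        rem take ownership of the whole tree, including files that currently deny us read
        takeown /F D:\path\to\share /R /D Y

        rem then grant Administrators full control recursively
        icacls D:\path\to\share /grant Administrators:(OI)(CI)F /T

    For Q2, the common approach is to stop granting users Full Control on their folders: with Modify instead, they can read and write everything but cannot edit permissions.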

  • Set usergroup and permissions of an ftp folder back to default

    - by OrangeTux
    I tried to create a new ftp user via the command line, but I did something wrong, and now I can access the server via FTP but can't see any files, no matter which user I log in as.

        ls -la
        drwxr-xr-x 13 root ftp 4096 2012-03-30 09:47 .
        drwxr-xr-x  7 web6 ftp 4096 2012-03-26 09:28 ..
        drwxr-xr-x  4 web6 ftp 4096 2012-03-26 13:31 actions
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 11:46 bin
        -rwxr-xr-x  1 web6 ftp 1520 2012-03-24 23:32 changelog.txt
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 13:30 css
        drwxr-xr-x  8 web6 ftp 4096 2012-03-24 22:43 external
        -rwxr-xr-x  1 web6 ftp  333 2012-03-26 15:12 .htaccess
        drwxr-xr-x  3 web6 ftp 4096 2012-02-27 15:07 images
        -rwxr-xr-x  1 web6 ftp 1606 2012-03-26 21:25 index.php
        drwxr-xr-x  2 web6 ftp 4096 2012-02-18 13:20 js
        drwxr-xr-x  2 web6 ftp 4096 2012-02-03 00:34 layout
        drwxr-xr-x  2 web6 ftp 4096 2012-03-29 23:35 library
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 09:47 log
        -rwxr-xr-x  1 web6 ftp  396 2012-03-24 15:04 menu.php
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 12:01 python
        drwxr-xr-x  2 web6 ftp 4096 2012-03-23 10:51 todo

    I can't see any directories or files because I changed the group owner, or the group owner's rights, on the ftp directory. How can I set the ownership of the files back to default so I can access them via FTP again?
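
    A sketch of the likely fix, assuming the listing above is the ftp root itself: the directory "." is owned by root while everything inside belongs to web6, so a chrooted web6 login may be refused at the top level. Handing the top directory back to web6 (the path below is hypothetical; use the real ftp root) would look like:

        # give the ftp root back to the site user, keeping group ftp
        chown web6:ftp /var/www/web6/ftproot

        # if permissions were also changed, restore sane defaults below it:
        # directories traversable, files readable
        chmod -R u=rwX,g=rX,o=rX /var/www/web6/ftproot

    Some ftp daemons (vsftpd, for one) also refuse chroot directories with unexpected write bits, so the exact defaults depend on the server in use.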

  • Word 2007 won't run, tries to reinstall, fails with error 1402.

    - by eidylon
    Okay, this problem has been plaguing this computer for a while now; we tried googling, and none of the answers we found solved it, so I am posting the answer here for posterity. Office 2007 Home/Student edition was installed on a computer running Vista (32-bit). One day Word just up and stopped working, while all the other Office programs continued to operate as expected. Every time you clicked the icon for Word, it popped up an install dialog with a message reading "Preparing to install...". After a few minutes of the little progress bar going and going, it errored out with error 1402, something to the effect of being unable to access the registry key HKEY_LOCAL_MACHINE\Software\Classes\.wll\.... Every answer we found had to do with reassigning the permissions on this key, giving full rights to SYSTEM or to Everyone, and propagating the changes down to all sub-keys. Whenever we attempted that, though, it told us we could not access the key due to permissions, even though we had run regedit as Administrator and were logged on with an administrative account. We also tried uninstalling and reinstalling Office, as well as doing a repair install; both attempts threw the same 1402 error. Also of note: the executable for Word (winword.exe) was MIA and no longer to be found in the Office install directory.
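
    When even an elevated regedit can't touch the key, one trick that often gets past this (a sketch, assuming Sysinternals PsExec is available) is to run regedit as SYSTEM, which can usually seize ownership where an administrator cannot:

        psexec -i -s regedit.exe

    From there, take ownership of the offending key, grant Administrators full control, and propagate to sub-keys. The same reset can be scripted with Microsoft's subinacl tool (the key below is the one from the error message):

        subinacl /subkeyreg "HKEY_LOCAL_MACHINE\Software\Classes\.wll" /setowner=Administrators /grant=Administrators=f

    Neither step explains the missing winword.exe, so a repair install will likely still be needed afterwards.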

  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true:

        I have ssh-key based access to a remote user (cloud)
        That user has password-free root access via sudo

    Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that runs a command on a pseudoterminal:

        import os
        import sys
        import errno

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            # parent: relay the child's output until the pty closes (EIO)
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was:

        screen -dmS sudo somecommand

    ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me that are going to be generally available?
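
    One generally-available answer worth noting: ssh itself can allocate the remote pseudoterminal. A single -t is ignored when there is no local terminal attached (as in a provisioning script), but doubling it forces allocation:

        # force remote pty allocation even without a local tty
        ssh -tt remotehost sudo yum -y install puppet

    This satisfies requiretty with no helper script, at the cost of the remote side treating stdout as a terminal, so expect terminal-style line endings in any captured output.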

  • How to make Virtualbox, OpenVPN, and Win2008 Web R2 like one another?

    - by Aquitaine
    Back with web developer guy wearing net admin hat. Hopefully this is an easy one. We have two servers on a public network at a hosted facility. Server A is our public-facing web server and server B is our database server. Both are running Windows 2008 Server R2 Web Edition. We want Server B isolated from everything except Server A, such that anyone who has to connect to server B goes through the VPN on Server A. It's not perfect since we have no access to do this on the router side, but it's what we've got. We've set up VirtualBox and OpenVPN Access Server on Server A. It has one network interface set to 'NAT' mode, such that OpenVPN gets its IP at 10.0.2.x, and to connect to the OpenVPN interface, I go to the local IP for the Virtualbox network adapter, 192.168.56.x, which works as I configured the appropriate ports using VBoxManage. My question is, do I need to be using Bridged Networking and give the VPN server its own IP, or is there some way to tell the server (either Windows or the Virtualbox OpenVPN) that 'any public connection on the real external IP on port X should be directed to this internal LAN address of 192.168.1.x on port Y'? OpenVPN itself doesn't seem to be aware of the server's real external IP unless we put it in Bridged networking mode; is that necessary or advisable? We're without RRAS since this is Web edition, but I feel like what we're going for is pretty simple. Thanks! Aq
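
    What's being described is exactly NAT port-forwarding, which VirtualBox supports without switching to bridged mode. A sketch, assuming the VM is named "openvpn-as" (hypothetical) and its NAT adapter is adapter 1:

        # host ports 443/tcp and 1194/udp -> the same ports inside the guest
        VBoxManage modifyvm "openvpn-as" --natpf1 "ovpn-tcp,tcp,,443,,443"
        VBoxManage modifyvm "openvpn-as" --natpf1 "ovpn-udp,udp,,1194,,1194"

    Leaving the host-IP field empty binds to all host addresses, so connections arriving on the real external IP are forwarded into the guest; OpenVPN itself never needs to know the external address. Bridged mode also works, but it is only necessary if the guest must hold its own routable IP.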

  • For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?

    - by user1322092
    I just purchased an SSL certificate to secure/enable only ONE domain on a server with multiple vhosts. I plan on configuring as shown below (non-SNI). In addition, I still want to access phpMyAdmin, securely, via my server's IP address. Will the below configuration work? I have only one shot to get this working in production. Are there any redundant settings?

    apache ssl.conf file:

        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

    apache httpd.conf file:

        ...
        DocumentRoot "/var/www/html"   #currently exists
        ...
        NameVirtualHost *:443   #new - is this really needed if "Listen 443" is in ssl.conf???
        ...

        #below vhost currently exists (the domain I wish to enable SSL for)
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain1.com
            ServerAlias 173.XXX.XXX.XXX
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

        #below vhost currently exists.
        <VirtualHost *:80>
            ServerName domain2.com
            ServerAlias www.domain2.com
            DocumentRoot /home/web/public_html/domain2.com/public
        </VirtualHost>

        #new - I plan on adding this vhost block to enable ssl for domain1.com!
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.domain1.com
            ServerAlias 173.203.127.20
            SSLEngine on
            SSLProtocol all
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

    As previously mentioned, I want to be able to access phpMyAdmin via "https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin", which is stored under "/var/www/html/hiddenfolder".
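
    For the phpMyAdmin-by-IP requirement, nothing above serves /var/www/html on 443, so one more vhost in the same style would be needed; a sketch (same cert paths as above, not a tested configuration):

        <VirtualHost *:443>
            ServerName 173.XXX.XXX.XXX
            DocumentRoot /var/www/html
            SSLEngine on
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
        </VirtualHost>

    Since the certificate names domain1.com rather than the IP, browsers will warn about the hostname mismatch before letting you through, which may be acceptable for an admin-only URL. Note also that when certificates are declared both globally in ssl.conf and per-vhost, the per-vhost directives win for that vhost, so declaring them in one place avoids the redundancy the question asks about.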

  • How do you change a Cisco ASA 5510 management interface?

    - by Sam Sanders
    I want to add a redundant interface to my Cisco ASA 5510. The management interface currently uses Ethernet0/1 (10.1.25.254/24), which is one of the interfaces I want for the redundant pair, so I wanted to set up Management0/0 as the new management interface. The other member of the redundant pair will be Ethernet0/2 (10.1.0.254/24); Ethernet0/3 (10.1.251.5/24) is not going to be part of it. I gave Management0/0 an IP address of 10.1.254.5 and was able to connect a Win7 box to Management0/0, use 10.1.254.5 as its gateway, and ping an address on the 10.1.251.0/24 network, but I can't ping the interface (10.1.254.5) itself. I also can't use ASDM/SSH to log onto the ASA at 10.1.254.5, even though I set up rules under Configuration > Device Management > Management Access > ASDM/HTTPS/Telnet/SSH that look like the original rules for the Ethernet0/1 interface. The last thing I can think to try is changing Configuration > Device Management > Management Access > Management Interface, but I'm a bit nervous about that, as its description is vague. What will it do if I change it? What is the correct way to change a management interface?
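
    For comparison, a minimal CLI sketch of the intended end state (the nameif and network are assumed from the question):

        interface Management0/0
         nameif management
         security-level 100
         ip address 10.1.254.5 255.255.255.0
         management-only
        !
        http server enable
        http 10.1.254.0 255.255.255.0 management
        ssh 10.1.254.0 255.255.255.0 management
        icmp permit 10.1.254.0 255.255.255.0 management

    The http and ssh lines matter because the ASA only accepts ASDM/SSH sessions on an interface from sources explicitly permitted for that interface, and icmp rules, where configured, likewise gate who may ping it; missing entries for the new management network would match the symptoms described.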

  • Troubleshoot IIS 6 and Intermittent SQL Server Connectivity Loss

    - by jpsnow
    I am not a System Administrator, but I am temporarily trying to fill that role. I have 2 Windows 2003 Servers. 1 server has my Microsoft SQL Database (SQL Server 2005) and the other is the web server with IIS6. Intermittently (once every month or 2) the web server loses connectivity to the database server for a period of roughly 10 minutes. It normally regains connectivity without intervention. The website is set up to send me an email when there is a problem connecting to the database. Within the email, I get the error code: [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]SQL Server does not exist or access denied. I am able to access both servers during the outage. I can go to my SQL server and open up SQL server without any issues. I have looked in the EventViewer and do not see anything. The last time that I had this issue, I stopped IIS6 and started it. After doing this, the issue was resolved. This leads me to believe that it is an issue with IIS and not SQL server. How can I begin troubleshooting?
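
    Since the error is the generic "SQL Server does not exist or access denied," a worthwhile step during the next outage is separating name resolution from TCP connectivity, run from the web server itself (the server name is a placeholder):

        rem does the name still resolve?
        nslookup sqlserver01

        rem does anything answer on the SQL port?
        telnet sqlserver01 1433

    If both succeed while the site still can't connect, the problem likely lives in IIS (for instance a wedged connection pool, which would be consistent with the restart fixing it); if telnet hangs, look at the network or SQL Server's TCP listener instead.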

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware of a few ways to get a custom, private extension pushed out on a domain:

        1. Host the .crx on a server and click "Specify a Custom App" in the management console
        2. Create a Domain App by uploading a zip to the Chrome Web Store
        3. Upload the extension from my developer account to the Chrome Web Store and publish it to a single "trusted tester," or make it unlisted

    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension simply does not push out. I link directly to the crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them! Option (2), having the school create a domain app, seems inefficient because it requires every school to upload its own zip. I would essentially have to email a zip file to each school and have them publish it, and every update to the extension would require the same process, so this doesn't seem ideal. I doubt that option (3) would work: if I published to the admin as a "trusted tester," I don't think the other people in the domain would be able to access it, and if it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity. Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
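
    One common stumbling block with option (1), offered as a guess rather than a confirmed fix: the console's custom-app field generally expects the extension's update-manifest XML rather than the bare .crx, with the .crx referenced from it. A sketch (the appid, version, and URL are placeholders):

        <?xml version='1.0' encoding='UTF-8'?>
        <gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
          <app appid='aaaabbbbccccddddeeeeffffgggghhhh'>
            <updatecheck codebase='https://example.com/extensions/school-a.crx'
                         version='1.0' />
          </app>
        </gupdate>

    The appid must match the ID the .crx was packaged with, and the codebase URL must serve the file with the application/x-chrome-extension MIME type.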

  • ProCurve ACL to prevent a subnet from leaving the switch

    - by kce
    I have a single HP ProCurve 2610 in a remote location that is connected to the rest of the network via SHDSL. There are two layer-3 networks on this segment, both in the same VLAN. ACLs are set up to deny one subnet (192.0.2.0/24) from ever leaving the switch, by virtue of being applied to the port attached to the upstream connection; the other subnet should be free to leave. Unfortunately, sFlow very clearly shows broadcast traffic from 192.0.2.0/24 on the upstream connection. ProCurve ACLs are not my strong suit, but I feel like I'm missing something very simple here.

        ip access-list extended "Filter for Camera Network"
           deny ip 192.0.2.0 0.0.0.255 0.0.0.0 255.255.255.255 log
           permit ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
           exit

        interface 24
           name "DSL - UPLINK"
           access-group "Filter for Camera Network" in
           exit

    Unless I am mistaken, traffic from 192.0.2.0/24 should be dropped as it crosses the uplink port (int 24), whereas all other traffic is permitted by the following default-allow rule. What exactly am I missing here?

    EDIT, answering two comments. Firstly, why do you have two subnets contained in the same VLAN? Because that's how it was configured by a previous administrator, and while it makes conceptual sense that a single subnet is "mapped" to a single VLAN, there's no technical constraint I am aware of that forces this to be the case. Secondly, instead of filtering inbound traffic on your uplink, you should be filtering outbound traffic; the HP 2600 series can only filter inbound traffic on interfaces. Should I change my filter to deny any to 192.0.2.0/24?
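
    The likely gap, sketched under the assumption that the camera hosts hang off ports 1-8 (hypothetical): "in" on port 24 filters traffic arriving from the DSL modem, not traffic leaving toward it, so the camera broadcasts sail out untouched. Since the 2600/2610 filters inbound only, the ACL has to be applied where the traffic enters the switch, i.e. on the camera-facing ports, with camera-to-camera traffic permitted first:

        ip access-list extended "Keep cameras local"
           permit ip 192.0.2.0 0.0.0.255 192.0.2.0 0.0.0.255
           deny ip 192.0.2.0 0.0.0.255 0.0.0.0 255.255.255.255 log
           permit ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
           exit

        interface 1-8
           access-group "Keep cameras local" in
           exit

    Whether the 2610 accepts per-port ACLs on a range like this should be checked against its manual; the syntax mirrors the question's own interface 24 stanza.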

  • Can I use a Drobo FS over the internet? -Or alternative?

    - by SeniorShizzle
    I have a lot of files. A HUGE Aperture library, lots of design work from Photoshop, and a rather large iTunes library too. I really want to get a Drobo FS, the networked one, to store all of my stuff on, so that I can get to it from my MacBook Air (which obviously with its minuscule 64GB drive can't hold my Aperture library by itself) and my iMac which is my main powerhouse. The dealbreaker for me is that I NEED to be able to access my Aperture library, and especially my iTunes library from across the internet. I understand that it will probably be slow and everything, but there's nothing else I can do besides hauling around a huge hard drive with me. So, is there any way I can somehow share my Drobo across the internet, on a VPN or something? My other alternative is to upload everything to my web host, FatCow, which offers unlimited disk space (something I hope to make them regret) and then access it using Expandrive. My only thoughts with this are that with the Drobo, any work that I do locally will be much snappier than if I have to work everything off the cloud. Any suggestions about alternatives would also be welcome.

  • master-slave datastore replication, automatic failover, and wackamole

    - by z8000
    I have 2 dedicated servers provisioned for my next project's datastores, configured for master-slave replication. There's no inherent automatic failover, but of course I want it: I'd love for access to the master datastore to always just work, without having to configure a client library to detect when the master is down and fail over to the slave. I've seen Wackamole, which is based on the Spread Toolkit. You provide Wackamole with a set of IPs and a bunch of nodes, and regardless of the up/down state of any of the nodes, those IPs will stay available/up. Wackamole detects when a node goes down and ARPs the IP(s) that were up on the now-down node. It's pretty neat, actually. So my thought was to use Wackamole to keep two virtual private IPs available, so that clients always use the same private IP to reach the master datastore and a distinct IP for the slave, even if both IPs end up hosted on the same node. My datastore servers are accessed over a private network, and I am unsure whether that interferes with Wackamole. Is this lunacy? How do you generally handle automatic failover of private services like a datastore? FWIW, it shouldn't matter, but the datastore is Redis. I don't want to hear "use mySQL" please :) Thanks.

  • Alignment of ext3 partition on LVM RAID volume group

    - by John P
    I'm trying to add a partition on an LVM logical volume that resides on a RAID6 volume group, and fdisk is complaining that the partition does not reside on a physical sector boundary. My question is: how do you calculate the correct starting sector for a partition on an LVM volume? This partition will be formatted ext3. Would it be better to just format the LV directly instead of creating a new partition?

        Disk /dev/dedvol/backup: 2199.0 GB, 2199023255552 bytes
        255 heads, 63 sectors/track, 267349 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 1048576 bytes / 8388608 bytes
        Disk identifier: 0x4e428f49

                        Device Boot    Start       End      Blocks   Id  System
        /dev/dedvol/backup1                63    267349  2146982827+  83  Linux
        Partition 1 does not start on physical sector boundary.

        lvdisplay /dev/dedvol/backup
        --- Logical volume ---
        LV Name                /dev/dedvol/backup
        VG Name                dedvol
        LV UUID                OV2n5j-7LHb-exJL-t8dI-dU8A-2vxf-uIicCt
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                2.00 TiB
        Current LE             524288
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     32768
        Block device           253:1

        vgdisplay dedvol
        --- Volume group ---
        VG Name               dedvol
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  3
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               1
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               14.55 TiB
        PE Size               4.00 MiB
        Total PE              3815448
        Alloc PE / Size       3670016 / 14.00 TiB
        Free PE / Size        145432 / 568.09 GiB
        VG UUID               8fBcOk-aXGx-P3Qy-VVpJ-0zK1-fQgy-Cb691J
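
    The arithmetic, for what it's worth: the reported optimal I/O size is 8388608 bytes, which is 8388608 / 512 = 16384 sectors, so any partition starting on a multiple of sector 16384 would be aligned (fdisk's legacy default of sector 63 is what triggers the warning). That said, a partition table inside an LV adds little, so a sketch of both routes:

        # simplest: no partition table at all, format the LV directly
        mkfs.ext3 /dev/dedvol/backup

        # if a partition is really wanted, start it on a 16384-sector boundary
        parted /dev/dedvol/backup mkpart primary ext3 16384s 100%

    Formatting the LV directly is common practice, and it keeps later lvextend + resize2fs operations simple because there is no partition table to grow.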

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by Mark Lauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.html files with ASP.NET forms authentication, and the problem seems to have something to do with using virtual directories to hold the html files. Here's how it's set up.

    Sample directory tree:

        c:/inetpub/ (nothing in here)
        d:/web_files/my_web_apps
        d:/web_files/my_web_apps/app1/
        d:/web_files/my_web_apps/app2/
        d:/web_files/my_web_apps/html_files/

    app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in each web app in IIS.

    Sample web directory tree:

        //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)
        //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)

    If I put a file called test.html in the root of //app1/, add the default mapping to the ASP.NET DLL, and set up security on the root folder with deny="?", then accessing test.html works exactly as expected: if I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated, it displays test.html. If I put test.html in the html_files directory, I get totally different behavior: the login.aspx page loads even though I'm authenticated. I stuck some code in to check:

        <p>authenticated: <%=User.Identity.IsAuthenticated%></p>

    I figured it would say false, because why else would it bother to load the login page? Nope, it says true; so it knows I'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on Google to see if I've missed something. Fingers crossed.
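
    One avenue worth ruling out (a sketch, not a confirmed fix): a virtual directory shared by two applications can pick up different inherited authorization than the app root expects, so making the rule explicit for the virtual directory in each app's web.config removes that variable:

        <location path="html_files">
          <system.web>
            <authorization>
              <deny users="?" />
            </authorization>
          </system.web>
        </location>

    It is also worth confirming in IIS Manager that the .html-to-aspnet_isapi.dll mapping is actually in effect on the html_files virtual directory itself, since IIS 6 stores application mappings per directory/application and an override there would bypass forms authentication entirely.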

  • Intranet setup for a small business - any resources?

    - by Rogue
    I want to set up an intranet for a small business. Current setup:

        28 computers running Windows (a few older PCs run Windows XP, but most run Windows 7)
        A spare Dell Pentium 3 that can run as a server
        6 switches, spare NICs, and plenty of LAN cable
        3 independent internet connections

    Currently we have 3 independent networks, each sharing one of the internet connections; the current network is set up solely to share internet access. What I need to achieve with this intranet:

        Set up one common network
        Instant file transfer via the local network (maybe set up a file server?)
        Local text and voice messenger software
        Bridge the 3 internet connections and route all internet traffic through the main server
        The ability to allow or deny internet access to any computer on the network
        Remote access from the main server to the client PCs on the network, to debug software issues

    What operating system should I use on the main server? Do I need a hardware firewall? Any setup guides, resources, or how-tos on how I can achieve the above requirements?

  • SVN, Samba and Symbolic Links. How to get them all to play together?

    - by Camsoft
    I've got a website project under version control that relies on files from an unversioned directory on the same server, referenced via symbolic links, and I'm currently storing the symbolic links in the repository. The idea is that if someone checks out a working copy onto the same server, they can edit and test their copy of the project before committing it back. When they check out their working copy, the symlinks are set up so that the entire site works during testing. The users who work on the project are Windows users, so I've set up samba shares on the server and mapped them to network drives in Windows; people edit their working copies directly on the server via the network shares, test them in the web browser, and then commit their changes back to the repository via TortoiseSVN. The problem: samba resolves the symlinks as expected, but when a user tries to commit, TortoiseSVN thinks the linked files are part of the project and tries to commit the target files to the repository rather than the symlinks themselves. I tried turning off symlink support in samba, which means the linked files cannot be resolved; I don't really want people to have access to the linked files, nor do I want to import them into the repository. The problem with this is that I then get:

        Can't stat '\\webserver\projects\working\project\symlinked_file.php'. Access is denied

    Apart from the symlink problem, everything works 100% perfectly: users can either check website projects out to their own machines and work on them (but not test), or check them out to their space on the dev web server, work on them, and fully test. So I don't want to change the workflow; I just need a solution to the symbolic link issue. Many thanks. Originally posted on StackOverflow: http://stackoverflow.com/questions/2400917/svn-samba-and-symbolic-links-how-to-get-them-all-to-play-together

  • Adding users to Sharepoint when they are not in the same domain

    - by jim-work
    Bear with me as I explain this; I'm working my way through SharePoint access as I go, and I'll clarify the question as I go along. The problem: we have about 10,000 users who need access to our SharePoint 2005-based reporting, and because our organization is migrating from one domain to another, we need to add each user twice, once for each domain. For the current domain this is no problem; we've got a PowerShell script, which I tweaked to add all the users in a given CSV file, and it takes about 5 minutes to run. The big problem is with users who are NOT in the currently active domain. Because the SharePoint server cannot authenticate those users, we can't add them directly; what we're doing instead is creating a temp user and then using STSADM.EXE to migrate that user to the proper domain/user_name, for each of the 10,000 users. The creation and migration takes about 5 seconds per user, or well over 12 hours for the full run. The questions: has anyone encountered this before? Is there a way to add users without requiring AD authentication? And why is STSADM.EXE running so slow? Thanks a lot for any advice or direction anyone can give me.
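
    For reference, the migration step being described is presumably stsadm's migrateuser operation, which looks like this (the logins are placeholders):

        stsadm -o migrateuser -oldlogin OLDDOMAIN\tempuser -newlogin NEWDOMAIN\realuser -ignoresidhistory

    -ignoresidhistory is what allows the mapping when the new account's SID history can't be resolved, and part of the per-user cost is simply starting stsadm afresh for every one of the 10,000 calls, which is one plausible reason the run is so slow.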

  • apache on CentOS opening default page on https

    - by Asghar
    I am new to apache, SSL, and configuration in general. I got a VeriSign certificate to secure my site, and I have the public cert, private key, and ca_intermediate cert files. I have configured ssl.conf as below:

        <VirtualHost _default_:443>
            DocumentRoot /var/www/mydomain.com/web/
            ServerName mydomain.com:443
            ServerAlias www.mydomain.com

            # Use separate log files for the SSL virtual host; note that LogLevel
            # is not inherited from httpd.conf.
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel warn

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

    The problem is that when I access www.mydomain.com over HTTP it works fine, but when I access it over HTTPS it just opens the apache default page, even though the green HTTPS indicator shows my certificates are installed correctly. How can I get rid of this situation? Thanks.

    EDIT: output of apachectl -S:

        -bash-3.2# apachectl -S
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:80 has no VirtualHosts
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:443 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        _default_:8081  localhost.localdomain (/etc/httpd/conf/sites-enabled/000-apps.vhost:10)
        *:8080          is a NameVirtualHost
                default server localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
                port 8080 namevhost localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
        *:443           is a NameVirtualHost
                default server mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
                port 443 namevhost mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
        *:80            is a NameVirtualHost
                default server app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                port 80 namevhost app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                port 80 namevhost mydomain.com (/etc/httpd/conf/sites-enabled/100-mydomain.com.vhost:7)
        Syntax OK

  • How can I report a website that uses the webmail APIs to send spam?

    - by Igoru
    I signed up for a cool job website that, unfortunately, asks whether you want to "invite your friends," and if you agree, you can give it access to your Gmail contacts to send the invites. However, contrary to what anyone would expect, it doesn't give you a list of people to choose from; it simply sends spam directly to your entire contact list, like an old-fashioned Outlook virus. When you complain, they simply say "we will check the application and see if there is anything that might be confusing for the users." For me and some friends who fell for the same prank, this is a clear breach of web best practices and a big betrayal of users' trust. So: what can we do to stop the website from using the Gmail/Yahoo/Outlook APIs to send spam this way? P.S. I wonder what would have happened if I'd also given this website access to post on my Facebook timeline. I've already had a couple of calls from relatives asking about the email, and I wonder how many unrelated people got this spam, like HR addresses from my past and whatnot.
