Search Results

Search found 2844 results on 114 pages for 'daniel martin'.

Page 41/114 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • Apache: Assign SSL server / client certs to directories

    - by Daniel Amaya
    I have multiple directories on my system, e.g., /var/www/dir1 /var/www/dir2 /var/www/dir3 And what I'd like to do is to generate a server/client SSL certificate for each directory, and then set up each directory such that the client cert must match the server cert in order to access said directory. Now, if someone has the client cert for /var/www/dir2 and they try to access /var/www/dir1, they will be unable to do so since those directories use different certs. Each of these directories is hosted on the same domain (i.e., domain.com/dir1, domain.com/dir2). Now, the problem I am having is that I am not exactly sure how to accomplish this in Apache. (Also, I don't really care for domain.com to require SSL, but I do want the directories to require it.)
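
    For illustration, a minimal sketch of the certificate side of such a setup - a dedicated CA per directory, with a client certificate signed by it. All file names and subjects are placeholders, not taken from the question:

        # create a CA used only for /dir1, then issue a client cert signed by it
        openssl req -new -x509 -days 365 -nodes \
            -keyout dir1-ca.key -out dir1-ca.crt -subj "/CN=dir1-ca"
        openssl req -new -nodes -keyout dir1-client.key -out dir1-client.csr \
            -subj "/CN=dir1-client"
        openssl x509 -req -in dir1-client.csr -CA dir1-ca.crt -CAkey dir1-ca.key \
            -CAcreateserial -out dir1-client.crt -days 365
        # On the Apache side (sketch only): SSLCACertificateFile is set at the
        # vhost level, SSLVerifyClient can be required per <Location>, and
        # per-directory distinctions are usually enforced with an SSLRequire
        # test on the client certificate's subject DN.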

    Read the article

  • List/remove files with filenames containing a date that's "more than a month ago"?

    - by Martin Tóth
    I store some data in files which follow this naming convention:

        /interesting/data/filename-YYYY-MM-DD-HH-MM

    How do I find the ones whose date in the file name is older than now - 1 month, and delete them? Files may have changed since they were created, so searching by last modification date is no good. What I'm doing now is filtering them in Python:

        prefix = '/interesting/data/filename-'
        import commands
        names = commands.getoutput('ls {0}*'.format(prefix)).splitlines()
        from datetime import datetime, timedelta
        all_files = map(lambda name: {
            'name': name,
            'date': datetime.strptime(name, '{0}%Y-%m-%d-%H-%M'.format(prefix))
        }, names)
        month = datetime.now() - timedelta(days = 30)
        to_delete = filter(lambda item: item['date'] < month, all_files)
        import os
        map(lambda item: os.remove(item['name']), to_delete)

    Is there a (one-liner) bash solution for this?
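
    For illustration, a sketch of the kind of bash loop being asked for. It assumes GNU date and relies on the YYYY-MM-DD-HH-MM suffix sorting correctly as a plain string:

        cutoff=$(date -d '30 days ago' '+%Y-%m-%d-%H-%M')
        for f in /interesting/data/filename-*; do
            [[ -e "$f" ]] || continue                        # no matches at all
            [[ "${f##*filename-}" < "$cutoff" ]] && rm -- "$f"
        done

    Replacing rm with echo first gives a dry run of what would be deleted.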

    Read the article

  • Increase the compression performance of VPN

    - by Martin
    I am currently switching from a system with HPN-SSH tunnels and compression enabled to something VPN-based. I have tried tinc and n2n so far; hamachi requires a library I do not have. In my primitive benchmarks I am not satisfied with the achievable bandwidth compared to the SSH tunnels. In tinc the low LZO setting performed best, but compression is only available in UDP mode. Ideally I would like a TCP-based VPN with multi-threaded compression. Can you suggest some ideas on how to increase the performance? Would it be possible to somehow put a compression filter in front of the tun interface? Or are there any VPN implementations that might be better suited to my needs (fast compression, TCP-based, switch mode, does not have to be super-secure)? I would consider tunnelling Ethernet over SSH, but according to some articles it is not advisable.

    Read the article

  • Ethernet 802.1x client -> WiFi AP on a Raspberry Pi?

    - by Martin Janiczek
    I have an Ethernet connection that requires 802.1x authentication (TTLS, MSCHAPv2, name + password). My goal is to connect that to something that would then act as a WiFi AP, so I can use the connection on more devices (iPhone, notebook, etc.). Would it be possible/a good idea to use a Raspberry Pi for this purpose? Or are there better-suited devices for this? EDIT: I found some alternatives, but because of low rep I can't post more than two links...

        OpenWRT + wpa_supplicant guide
        Carambola - works with OpenWRT (but probably not standalone?)
        Hornet-UB - works with OpenWRT
        Asus RT-N10+ + OpenWRT how-to
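
    For illustration, a rough sketch of what the Linux box in the middle would run - wpa_supplicant with the wired driver for the 802.1x login, then hostapd plus NAT for the AP side. All paths, credentials and interface names are placeholders:

        # /etc/wpa_supplicant/wired-8021x.conf would contain something like:
        #   network={
        #       key_mgmt=IEEE8021X
        #       eap=TTLS
        #       identity="username"
        #       password="secret"
        #       phase2="auth=MSCHAPV2"
        #   }
        wpa_supplicant -B -i eth0 -D wired -c /etc/wpa_supplicant/wired-8021x.conf
        dhclient eth0                       # get an address once authenticated
        hostapd /etc/hostapd/hostapd.conf   # bring up the AP on wlan0 (separate config)
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # NAT the WiFi clients out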

    Read the article

  • sendmail - DSN: Name Server host not found

    - by Daniel Mitchell
    I've recently set up a new backup server and have configured sendmail with a smart_relay_host, but every email sent from the command line goes nowhere. From mail.log:

        Oct 3 14:32:52 **back01 sm-mta[16570]: p93DWqtC016568: to=<[email protected], ctladdr= (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=120762, relay=10.2.30.60, dsn=5.1.2, stat=Host unknown (Name server: 10.2.30.60: host not found)
        Oct 3 14:32:52 ***back01 sm-mta[16570]: p93DWqtC016568: p93DWqtC016570: DSN: Host unknown (Name server: 10.2.30.60: host not found)

    DNS is working correctly on this box. I can do forward and reverse lookups. I can also telnet to the mail relay and send a message that way. I'm stumped, any suggestions?
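
    One common cause of this symptom (a guess, not confirmed from the log alone): when the smart host is given as a bare IP address, sendmail treats it as a hostname and tries an MX/host lookup on it. Square brackets suppress that lookup. A sketch, with paths depending on the distribution:

        # in sendmail.mc:
        #   define(`SMART_HOST', `[10.2.30.60]')dnl
        # then rebuild the config and restart, e.g.:
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        /etc/init.d/sendmail restart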

    Read the article

  • How can I transfer a logged-in user's login data from one server to another?

    - by Martin
    I have one server "A" where users can log in. Login is verified by an LDAP server "L". I have a different server "B" where users can log in, too. Login is verified by the same LDAP server as before. Both servers are standard web servers with PHP. My goal is: if a user is logged in to server "A" and clicks a link to log in to server "B", the user should automatically be logged in without re-entering username and password. What is a good and secure way to achieve this? I can't submit the username and crypted password to server "B". I can't use the PHP session of server "A", because it does not exist on "B". Cookies won't work either. I think that there is a way, but I just can't see it. Any help is very much appreciated.

    Read the article

  • Directly printing to remote CUPS/IPP server on Snow Leopard

    - by Martin v. Löwis
    I need to use Kerberos authentication when printing from my OS X machine; however, the machine itself does not have a service account in Active Directory, so the KDC will not issue a delegation ticket for the local CUPS installation. I think printing could work if the printing framework printed directly to the network CUPS server (or even to the Windows print server), bypassing the local CUPS. Is it possible to set up printing so that it directly accesses the remote print server? (Asking for a service ticket for that server would succeed.)
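
    For illustration, a sketch of adding a local queue that sends jobs straight to the remote CUPS/IPP server with Kerberos (negotiate) authentication. Host and queue names are placeholders, and a valid ticket (kinit) is assumed:

        lpadmin -p remote_q -E \
            -v ipp://printserver.example.com/printers/SomeQueue \
            -o auth-info-required=negotiate
        lp -d remote_q testpage.txt    # jobs now go directly to the remote server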

    Read the article

  • QuickTime won't install on Windows 7 Ultimate 64-bit

    - by Martin
    Hi! I am trying to install QuickTime (actually I just need iTunes, but iTunes needs QuickTime), but it fails. There seems to be a problem with the folder C:\Program Files (x86)\QuickTime\ - QuickTime wants to write the file QTTask.exe, but complains that it does not have permission to do so. The same thing happens with \PropertyPanels\PanelHelperBase.qpa. I have tried deleting all Apple programs (in the order suggested in the support forum) and also tried to delete the temp folder. That did not work. I have tried to manually adjust the permissions of the QuickTime folder - no effect. I have run the installation file with admin rights and with different compatibility modes, to no effect. I consider myself an experienced user - able to solve most problems - but now I am stuck. I need some input / fresh ideas on how to tackle the problem. This is very annoying, as I cannot sync my iPhone/iPad/iPod while iTunes is not running - due to the (sorry) stupid idea of only having your device linked to one library. Please help. Thanks!

    Read the article

  • How do I change Internet Explorer security settings for all users using Active Directory?

    - by Martín Fixman
    I recently created an intranet application for my company, but to work properly it must execute an ActiveX control that runs a program locally. However, the only way I found to make this work was to use Internet Explorer and set the Intranet zone security to a customized "very low" configuration so that ActiveX controls can run without prompting. I think there is a way to configure IE's settings for all users automatically from Active Directory, but I can't find it. Any help?

    Read the article

  • Developing a high-performance and scalable Zend Framework website [on hold]

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability on multiple servers) + jQuery, or maybe throw Backbone.js in there, plus a CDN for static content, a server for images and a scalable server for the requests and the rest. My questions are:

    - What do you think about my approach?
    - What solutions would you recommend for developing a high-performance, scalable application expected to have a lot of traffic, using PHP (Zend Framework 2)? I would be interested in your approach.

    I should note that I'm a Zend developer; I have been working with Zend for over 3 years, which is why I'm choosing it.

    Read the article

  • Apache rewrite rule to remove index.php and direct certain areas to https

    - by Stephen Martin
    I have a CodeIgniter application running on Apache2. I have managed to remove the index.php from the URLs with this .htaccess:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]

    Now I want to make certain parts of the site redirect to https, so I tried this:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]
        RewriteRule ^/?cpanel/(.*) https://%{SERVER_NAME}/cpanel/$1 [R,L]
        RewriteRule ^/?login/(.*) https://%{SERVER_NAME}/cpanel/$1 [R,L]

    But it doesn't work. I have to say that when it comes to Apache rewrites I'm a noob. I can't find any tutorials on how to remove index.php and rewrite/redirect certain parts of the site to https. Any ideas? Thanks.
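
    For illustration, a sketch of how the combined .htaccess could be ordered (not tested against this site): the HTTPS redirects come before the index.php catch-all and are guarded by an HTTPS check to avoid redirect loops, and in .htaccess context the matched path has no leading slash:

        RewriteEngine on

        # force HTTPS for /cpanel and /login first
        RewriteCond %{HTTPS} !=on
        RewriteRule ^(cpanel|login)(/.*)?$ https://%{SERVER_NAME}%{REQUEST_URI} [R=301,L]

        # then the usual CodeIgniter front-controller rule
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]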

    Read the article

  • Creating a network link between 2 very close buildings

    - by Daniel Johnson
    I have a charity who have two adjacent medium-sized modern detached houses (in the UK): the buildings stand next to each other and are less than 5 metres apart. They have DSL connected to a single computer in one of the buildings. They want to add a network with wireless, and want it to work across both buildings. Being a charity they need to keep costs down. The network would be used for sharing Word documents, e-mail, browsing and Skyping. My initial thoughts were to connect the buildings with fibre. So:

    Option 1: Use fibre between the buildings. Sufficient cable and two TP-LINK MC100CM Fast Ethernet Media Converters. Cost ~£80.00. But there is the extra cost and hassle of running the cable down and up the external walls, lifting and relaying paving, and burying it underground. Never having fitted fibre, I'm also a little worried about going up the wall and then bending the cable at 90 degrees to go through the wall and into the building.

    Option 2: Use two TP-Link TL-WA7510N High Powered Outdoor 5GHz 15dBi wireless antennas to connect the buildings. There is a clear line of sight at first-floor level. Cost ~£100. And much easier to fit than fibre! Is using the TL-WA7510Ns overkill? Is there something more suitable? I had hoped to use some Netgear stuff, e.g. two DGN2200, one in each house, and also use them to provide the wireless link between the buildings. However, in bridge mode wireless client association is not available, and repeater mode with client association only supports WEP security, which isn't strong enough. Is there something similar that would be up to the job?

    Option 3: Connect the buildings with UTP cable. My concerns here are the risk of electric shock due to a difference of potential between the buildings (or are they so close this shouldn't be an issue?) and protection from lightning strikes. Is fitting lightning arrestors expensive? And what can be done to mitigate the risk of shock?

    This all falls outside my area of expertise, so I would really appreciate some advice.

    Read the article

  • How do I extract files from one tarball to another tarball in one step?

    - by Martin
    I have some fairly large tarball archives from which I need to extract some files. I will later repack those files to transfer them to another server. Currently that is a two-step (multi-step) process for me:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> <folder-to-be-extracted>

    or alternatively with wildcards:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> \
            --wildcards --no-anchored '*pattern*'

    Then I go ahead and recompress the created folder:

        tar -vczf small.tgz ttmp/*
        rm -rf ttmp

    How can I combine these two commands into one? Like this:

        tar -x large.tgz > tar -c small.tgz

    Just to show what I already tried: whenever I search for the term "extract" I end up here or here or even here. When I use the term "split" I end up here, and that is definitely not what I intend to do. When I use "repack" I end up in strange places.
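
    A true tar-to-tar pipe is awkward because tar cannot re-pack a stream of already-extracted files without writing them somewhere, but the temporary directory can at least be hidden inside one chained command. A sketch, with <INT> and the pattern as placeholders exactly as above:

        tmp=$(mktemp -d) && \
        tar -vxzf large.tgz -C "$tmp" --strip-components=<INT> \
            --wildcards --no-anchored '*pattern*' && \
        tar -vczf small.tgz -C "$tmp" . && \
        rm -rf "$tmp"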

    Read the article

  • Installing the Apple Root Certificate Authority on CentOS CLI

    - by Daniel Hollands
    I could be barking up the wrong tree here, but I'm looking for help on installing Apple's root certificate (http://www.apple.com/certificateauthority/) on a CentOS server via the command line - which I need in order to send messages to their APNS system. The code I'm using for this purpose is a variation on this: https://github.com/jPaolantonio/SimplePush/blob/master/simplepush.php - which works perfectly well on a Windows server, but as soon as we try to use it on a CentOS one, it falls over. We're led to believe this has something to do with not having the CA installed on our CentOS box - but all efforts to do so have failed. As the CentOS server is headless, we need the ability to do this via the command line. Can someone help?
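
    For what it's worth, a common stumbling block is that such certificates download in DER format (.cer) while OpenSSL/PHP expect PEM. A sketch, assuming the .cer file from the page above has already been downloaded (the file name may differ):

        openssl x509 -inform der -in AppleIncRootCertificate.cer -out apple_root.pem
        # either trust it system-wide (standard CentOS bundle path) ...
        cat apple_root.pem >> /etc/pki/tls/certs/ca-bundle.crt
        # ... or reference apple_root.pem via the 'cafile' option of the PHP
        # stream context used by the push script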

    Read the article

  • Configuring sendmail to use one outbound MTA exclusively

    - by Charlie Martin
    I have a sendmail problem, and I'm anything but a sendmail guru -- I could use some help. My problem is that I have a system intended to be more or less an "appliance" -- it's not intended to have an admin. Because of this, it needs to be able to "call home" by sending email. As we have configured it, this works fine -- using sendmail, it finds the appropriate relay by looking up an MX record and everything works fine. Now, however, because of security concerns, we want to limit it to using exactly one relay, so for example relay.corp.example.com. Should the user configure it to use, say, fubar.example.com, the mail sending should fail or be deferred. I thought that by configuring sendmail with a /etc/mail/server.switch file containing hosts files without dns, I'd get that effect. This doesn't work -- instead, if it gets mail addressed to [email protected], it tries to talk directly to example.com, and ignores the configured server. Any ideas?
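
    For illustration, one approach that matches this "appliance" use case is sendmail's nullclient feature, which forwards everything to a single named host and never attempts direct delivery. A sketch only; the relay name is the example from the question, and nullclient is meant to be essentially the only feature in the .mc file:

        # in sendmail.mc:
        #   FEATURE(`nullclient', `relay.corp.example.com')dnl
        # or, less drastically, pin the relay (brackets skip the MX lookup):
        #   define(`SMART_HOST', `[relay.corp.example.com]')dnl
        # then rebuild and restart (on systems that ship a Makefile in /etc/mail):
        make -C /etc/mail && service sendmail restart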

    Read the article

  • What ports does Management Studio use to connect to SQL Server (2005)

    - by Martin
    I have SQL Server 2005 operating on a remotely hosted server and wish to access it from my own machine using Management Studio. The firewall on the remote server is already set up to allow all traffic from my external IP - and that works great. However, I have a load-balancing router that has a second line attached to it - unfortunately the second line has a dynamic external IP. I therefore need to set up a rule in the router to always send data from Management Studio over the first line - but I need to know the port numbers. Can you advise?
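
    For reference, the defaults are TCP 1433 for the database engine and UDP 1434 for the SQL Server Browser service (used to locate named instances); a non-default port would show up in SQL Server Configuration Manager on the server. A quick reachability check from any box with netcat (hostname is a placeholder):

        nc -zv sql.example.com 1433     # database engine (TCP)
        nc -zvu sql.example.com 1434    # SQL Browser for named instances (UDP)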

    Read the article

  • SSD suddenly full

    - by Daniel
    Today the hard drive of our server was suddenly full. The disk usage had always stayed around 50% in the weeks and months before (old data is regularly expunged from the server). I deleted 10 GB of files in /tmp, which strangely freed 51 GB. Here is what I did:

        root@***:~# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda3       139G  137G     0 100% /
        tmpfs           3,9G     0  3,9G   0% /lib/init/rw
        udev            3,9G  116K  3,9G   1% /dev
        tmpfs           3,9G     0  3,9G   0% /dev/shm
        /dev/sda1       985M   25M  910M   3% /boot

        root@***:/var# du -hs *
        3,3M    backups
        438M    cache
        9,4G    lib
        4,0K    local
        12K     lock
        76M     log
        24K     mail
        4,0K    opt
        88K     run
        184K    spool
        10G     tmp
        12K     www

        root@***:/var/tmp# find -type f -print0 | xargs -0 rm
        root@***:/var/tmp# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda3       139G   81G   51G  62% /
        tmpfs           3,9G     0  3,9G   0% /lib/init/rw
        udev            3,9G  116K  3,9G   1% /dev
        tmpfs           3,9G     0  3,9G   0% /dev/shm
        /dev/sda1       985M   25M  910M   3% /boot

    Any explanation as to why deleting 10 GB in /tmp gave me back 51 GB on the disk? Could this point to an SSD failure? Are there any tools for Debian to test SSD health? I have already checked syslog. The first entry relating to this incident is a MySQL message:

        1:22:02 [ERROR] /usr/sbin/mysqld: Disk is full writing...

    So I have absolutely no idea what caused this.
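
    One common explanation for this pattern (a guess, not a diagnosis): files that were already deleted but are still held open by a running process - their blocks are only released once the process closes them or is restarted, which would also explain why du and df disagree. A quick way to check, assuming lsof is installed:

        lsof +L1                     # open files with link count 0, i.e. deleted but busy
        lsof -nP | grep '(deleted)'  # more verbose variant
        df -h / ; du -shx /          # compare the filesystem view with the directory view

    If that is what happened here, restarting the process still holding the files frees the space without deleting anything further.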

    Read the article

  • How to charge an iPhone on a Windows PC without Installing iTunes

    - by Martin Hollingsworth
    I want to be able to charge my iPhone on my work computer, which is running Windows Server 2008 SP1 64-Bit. When I plug the iPhone in with the USB cable, it will not charge. Windows attempts to locate a driver for the device but only comes up with a generic Camera device and even if I allow that to be installed, the iPhone still does not charge. I have checked the computer's BIOS settings and did not find anything relating to power on the USB devices. I also tried this on ports at the back of the computer in addition to those on the front. The PC is a Dell Optiplex 780. As far as I can tell USB devices do not charge unless Windows has installed an appropriate driver. Since it is a work computer I do not want to install iTunes which does include a driver. I have a workaround that I will post as an answer for reference.

    Read the article

  • iptables & IP-routed network solution for host net and VMs' subnet

    - by Daniel
    I've got a Proxmox VE 2.1 KVM node on Debian and a bunch of guest VMs. This is what my networking looks like:

        # network interface settings
        auto lo
        iface lo inet loopback

        # device: eth0
        auto eth0
        iface eth0 inet static
            address 175.219.59.209
            gateway 175.219.59.193
            netmask 255.255.255.224
            post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

    And I've got two working subnet solutions:

        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.0.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up ip route add 10.10.0.1/24 dev vmbr0

    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution allows me to communicate between VMs:

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE

    I can even NAT internal addresses:

        -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345

    My inexperienced mind is ready to give each VM two network adapters: one for the first solution and another for the second (with slightly different addresses), but I'm pretty sure that is a dumb way to solve the problem and that everything can be handled via iptables/ip route rules that I can't manage to create. I've tried a dozen "wizard manuals" and "howtos" to mix both solutions, but without success. Looking for advice (and good reading links for networking beginners).
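
    For illustration, a sketch of a single-bridge variant (not tested on this node): the likely problem with the second solution is that the MASQUERADE rule NATs out of vmbr1 itself instead of out of the uplink. NAT on eth0 plus forwarding usually gives both internet access and VM-to-VM traffic on one bridge; interface names and the subnet are taken from the question:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE
        iptables -A FORWARD -i vmbr1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o vmbr1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        # guest-to-guest traffic on the same bridge is switched inside vmbr1
        # and never needs to reach eth0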

    Read the article

  • Is it possible to configure TMG to impersonate a domain user for anonymous requests to a website?

    - by Daniel Root
    I would like to configure Forefront Threat Management Gateway (formerly ISA server) to impersonate a specific domain user for any anonymous request to a particular listener. For example, for any anonymous request to http://www.mycompany.com, I would like to serve up http://myinternal as though MYDOMAIN/GuestAccount were accessing the site. Is this even possible in ISA/TMG? If so, where do I go to configure this?

    Read the article

  • Best technique for reusing a Windows system image across configurations

    - by Martin Wiboe
    We are a small company that provides solutions for ventilation systems. Part of the solution is a "controller" which communicates with the ventilation equipment. These controllers are simply Dell computers that come with our Windows 7 system image on them and sometimes some special hardware. We typically do a batch of 10 controllers at a time. We have been using Norton Ghost to apply the system image, but this process breaks because Dell changes the system configuration often, and our Windows image now does not contain the correct drivers. This is especially a problem when they change the RAID controller. To improve this, I see 2 options:

    1. Use some kind of virtualization and install a hypervisor on each PC. This would solve the driver problem, but probably cause trouble with our special hardware.
    2. Use some method of adding the proper drivers to our Windows image in offline mode.

    I haven't got much experience in either of these approaches. How would you solve our problem?

    Read the article

  • iTunes video black screen until select computer output

    - by Daniel Huckstep
    I don't remember this happening before, but now whenever I play any video in iTunes (podcast video, movie, TV show, iTunes extras stuff but the menus work) it just shows a black screen with the sound playing. If I stop it, select "Computer" in the little output control on the iTunes video control panel that pops up, then play again, it works fine. What the heck? Tried rebooting, updating, with and without external monitor. OSX 10.6.6

    Read the article

  • How to set up a PC which can boot both Linux AND Windows?

    - by Martin
    Our PC was running Windows XP up to now. It has become incredibly slow and I'm considering switching to Linux (Ubuntu?!) as a fresh OS. However, there are some applications we rarely use which run only on Windows, and I also want the possibility to easily go back to the old system if I should find while testing Linux that anything is missing or not available. So the idea is to install Linux on a new (second) hard drive and use the existing Windows XP from a virtual machine (converted by Paragon Drive Backup) during the transition time. We have a lot of data on the PC, tens of GBs of photos (managed by Picasa), ... My questions: What would be the best way to set up the new hard drive (partitions)? I assume that I cannot access the Linux data from Windows, but that I could access (read/write) Windows drives from Linux? Does anyone know good tutorials for this use case? What other things might I have to consider for the Windows-to-Linux transition?
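
    Regarding the read/write question: NTFS partitions are normally mounted from Linux with ntfs-3g, which gives full read/write access. A sketch, with /dev/sda1 as a placeholder for the old Windows partition:

        sudo mkdir -p /mnt/windows
        sudo mount -t ntfs-3g /dev/sda1 /mnt/windows
        ls /mnt/windows    # the XP data, e.g. the Picasa folders, is now accessible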

    Read the article

  • Gnome 3 screensaver (not suspend) resume hook?

    - by Daniel
    Whenever my laptop screen wakes up from the screensaver (i.e. resumes from screen blanking), some settings such as the keyboard backlight are reset, so I'd like to run some scripts each time this happens. I'm using GNOME 3 with Fedora 17, by the way. While researching this issue I came across pm-utils, which allows one to hook anything to events handled by pm-utils, but it seems to me that monitor sleeping (i.e. the screensaver) is not one of them. It looks like pm-utils only handles suspend and hibernate. So is there a way to hook custom programs up to the screensaver?
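
    For illustration, one approach that does not depend on pm-utils: watch the session bus for the screensaver's ActiveChanged signal and run a script whenever it reports false (screen waking up). A sketch; restore-kbd-backlight.sh is a placeholder for whatever resets the keyboard backlight:

        dbus-monitor --session \
          "type='signal',interface='org.gnome.ScreenSaver',member='ActiveChanged'" |
        while read -r line; do
            case "$line" in
                *"boolean false"*) ~/bin/restore-kbd-backlight.sh ;;
            esac
        done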

    Read the article
