Search Results

Search found 38034 results on 1522 pages for 'possible'.


  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run a Sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short, the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work, because the way we customize all the Win7 settings is complex and we do not want the user touching the settings). I understand Microsoft will not support Windows 7 clones if they are not sysprepped, and that is fine for us.

    Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux, because we want to use open source tools to do the imaging (fsarchiver, partclone, etc.; basically the same tools Clonezilla uses internally to clone NTFS partitions).

    The question is: if we clone using these tools in Linux, how would we generate a new SID afterwards (without the use of sysprep)? Is there any way to do it within a Linux environment? The whole image process is automated, so if it is a simple command that I can just throw in my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas?

    EDIT: Forgot to mention that the target machines we are restoring the image on are EXACTLY the same.
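
    For what it's worth, what Acronis calls the "NT signature" appears to be the 4-byte disk signature stored at offset 440 (0x1B8) of an MBR-partitioned disk, which Windows uses to map drive letters; it is not the same thing as the machine SID that sysprep regenerates. A hedged sketch of randomizing just that signature from a Linux imaging script, assuming the restored disk uses MBR and sits at /dev/sdX (placeholder device name):

        # overwrite the 4-byte NT disk signature at offset 440 of sector 0 with random bytes
        dd if=/dev/urandom of=/dev/sdX bs=1 count=4 seek=440 conv=notrunc

        # print the new signature, e.g. for the imaging log
        dd if=/dev/sdX bs=1 count=4 skip=440 2>/dev/null | xxd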

    Read the article

  • Configure one IIS site to handle two separate SSL certificates using external Load Balancing or SSL Acceleration Servers

    - by bmccleary
    I have one web application on our server that needs to be referenced by two different domain names, both of which have their own SSL certificates. The application is exactly the same for both domains, but we have to keep the two domain names for legal reasons. The problem is that, since both domains need their own SSL certificate, our IIS 7.5 configuration has to contain two separate IIS applications (both pointing to the same physical location), each with its own unique IP address and SSL certificate installed.

    Now, I know that, due to the nature of SSL communications, this is by design, and that you can't assign more than one SSL certificate per IP address and domain name. My question is: is there any way around this limitation so that we can keep one web application in IIS and have it service two SSL certificates based on host name? I know that this is not possible with the basic IIS configuration, but I was thinking that with some combination of external load balancing and/or SSL acceleration servers/services we could have those servers process the SSL requests and leave IIS clean, with one single application. I am not familiar at all with these technologies, hence the reason I am asking if it is theoretically possible. If not, does anyone else know how to achieve this?
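
    A hedged sketch of the SSL-offloading idea, assuming HAProxy (1.5 or later) as the terminating front end and SNI-capable clients; the certificate paths, backend address and names below are placeholders, not anything from the original setup:

        # Hypothetical /etc/haproxy/haproxy.cfg fragment:
        #
        #   frontend https_in
        #       # one bind line, two certificates; HAProxy picks the right one via SNI
        #       bind *:443 ssl crt /etc/haproxy/certs/domain-a.pem crt /etc/haproxy/certs/domain-b.pem
        #       default_backend iis_app
        #   backend iis_app
        #       # plain HTTP to the single IIS site; IIS itself no longer needs a certificate
        #       server web1 192.0.2.10:80 check
        #
        # validate the config and reload:
        haproxy -c -f /etc/haproxy/haproxy.cfg && sudo service haproxy reload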

    Read the article

  • Best way to integrate applications to windows 7 install.wim image

    - by cyph3r
    Right now I have an unmodified .iso of a Windows 7 32-bit and 64-bit installation disk, and I need to integrate some applications (Office, Adobe Reader, etc.) and Windows updates into it, so that when Windows is installed those applications/updates are already installed and working.

    Requirements:
    - My output has to be an install.wim image containing the new/improved Windows installation files, because the deployment is done via a PXE server and a custom WindowsPE environment.
    - The procedure to create the install.wim has to be as automatic as possible. I can't create it manually every time I want to incorporate a new Windows or application update into the image.
    - The image will be installed on 100+ computers, so it needs to be 'generic'.

    I've never done something like this before, but from what I've found a possible solution to this would be:
    - Create a reference installation (preferably on a VM so I can take snapshots), complete with its applications/updates/settings. After the complete setup, take a snapshot of the installation.
    - Run C:\Windows\System32\sysprep\sysprep.exe /oobe /generalize /shutdown to sysprep the machine.
    - Boot into a WindowsPE environment and capture the .wim image using GImageX.
    - Deploy the .wim and enjoy the rapid installation times. :D

    Does that sound ok? Would you recommend anything else? Right now the applications are installed after the installation of Windows is complete, so the total installation time is quite long. That's why I need a different approach.

    Read the article

  • Directory tree in a Resource without extraction...

    - by Corelgott
    Hi all, I am looking for a way to store a complete directory, including subdirectories, in an application's resources and not have to extract it to use it.

    Details:
    - We would like to use GeckoFx (Gecko as a C# component) in one of our applications. GeckoFX needs the XUL-Runner and needs to find its folder structure.
    - We have some other data which I would prefer not to extract to the customer's PC; at least not onto something persistent like an HDD...

    Getting the complete directory into the resources is not that big a deal: compress it to one file and done. But using it without writing it to disk is something else. I have a strong dislike for temp folders and such things. Would anything like a RAM drive be possible? Some part of the RAM being mounted? Does something like this even exist as a library, or would this only be possible with a device driver? Any thoughts on this? Thanks in advance! Corelgott

    Read the article

  • SQL Server 2008 R2 Error 15401 when trying to add a domain user

    - by Alice
    I am trying to add a domain user. I am doing the following:
    - Expand Security
    - Right click on Logins
    - Select New Login...
    - For the login name, select Search
    - Click on Locations and select Entire Directory
    - Type the username
    - Click Check Names (the name becomes underlined and picks up some more info)
    - Click OK
    - Click OK

    I then get the 15401 error. I have found http://support.microsoft.com/kb/324321 and checked its suggested causes:
    - The login does exist
    - There are no duplicate security identifiers
    - Authentication failure: I don't think this is happening, as I can browse AD
    - Case sensitivity should not be the problem, as I am doing the Check Names step and it is correcting it
    - Not a local account
    - Name resolution: again, I can see the AD

    I have rebooted the server (VM) and the issue is still happening. Any ideas?

    Edit: I have also disabled the following policies and rebooted the server (per http://talksql.blogspot.com/2009/10/windows-nt-user-or-group-domainuser-not.html):
    - Domain member: Digitally encrypt secure channel data (when possible)
    - Domain member: Digitally sign secure channel data (when possible)

    Edit 2: I have also disabled "Digitally encrypt or sign secure channel data (always)" and rebooted.

    Edit 3: Since the question has moved site I no longer have access to comment, etc... I have compared the DNS on the server against a machine where it is working. The DNS servers are the same on both...

    Read the article

  • Adding Netem Filter Rules

    - by fontsix
    I am new to programming and to using Linux. My question is: is it possible to add netem filter rules later? I want to create a PHP interface for netem, and I don't know how many filters will be required; this should be somewhat dynamic.

    For example: a user with a static IP starts a netem command (latency) through the PHP interface. This means these five commands are executed by PHP in the first step:

        $classid = 11;
        $handle = 10;
        "sudo tc qdisc add dev eth0 handle 1: root htb";
        "sudo tc class add dev eth0 parent 1: classid 1:1 htb rate 100Mbps";
        "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps";
        "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms";
        "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid";

    Now, if there were a second user who wants to use netem independently of the first user, I only want to execute the last three commands, like:

        "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps";
        "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms";
        "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid";

    There is an algorithm for incrementing the variables $classid and $handle, so that should work. Now my question: is it possible to only add these three commands to add a new class with a new qdisc and a new filter rule? Or how else can I realize it? The Apache error_log tells me "sh: line 1: flowid: command not found" but I can't find any mistake. I hope you can help. Best regards, fontsix
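
    A minimal shell sketch of the per-user case, assuming the root htb qdisc and the 1:1 parent class from the first run already exist; CLASSID, HANDLE and DEST are placeholders that the PHP side would fill in. As an aside, the "flowid: command not found" error usually means the command string reached the shell with an embedded newline, so the tail of the filter command ran as a separate command; keeping each tc invocation on a single quoted line (or using an explicit continuation as below) avoids that.

        #!/bin/sh
        # add one more class + netem qdisc + filter for an additional user
        CLASSID=12          # incremented per user (placeholder)
        HANDLE=20           # incremented per user (placeholder)
        DEST=192.0.2.55     # that user's static IP (placeholder)

        sudo tc class add dev eth0 parent 1:1 classid "1:$CLASSID" htb rate 100Mbps
        sudo tc qdisc add dev eth0 parent "1:$CLASSID" handle "$HANDLE:" netem delay 100ms
        sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 \
            match ip dst "$DEST" flowid "1:$CLASSID"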

    Read the article

  • My home box as my own host?

    - by Majid
    Hi all, I have a 512 kb/s DSL service at home. I do not have a static IP, but I can get one if I pay my ISP some extra. Now, if I get the static IP, can I make my home box act as my internet host? What else do I need? Thanks

    P.S. I know that, if it is possible at all, the site I make available this way might be slow; that is alright. My question is whether it is possible at all.

    Edit: I need this for very small traffic. I am a PHP developer, and for my projects I am often asked to provide a demo. I currently use free hosting for this purpose, but it is down most of the time and support is non-existent. So I thought I would set up my home computer as my test server. With this, please note that:
    - I will only occasionally have 'visitors', and that will be one or possibly two visitors at any time.
    - These demos are to showcase functionality, so no big images are served and page views will normally generate under 100KB of traffic.

    Read the article

  • FFmpeg overlay two videos, one input with transparency

    - by Gian B.
    I am trying to create a karaoke video from a CD+G file (converted to AVI using FFmpeg) and add a video as a background behind the lyrics.

    Here's a screenshot of the output from the CD+G conversion; for simplicity let's call this lyrics.avi: http://imgur.com/wUwHUhV

    Now I have this video.mp4 file that I'd like to put behind lyrics.avi. Here's a sample of what I'm trying to achieve: http://imgur.com/8GuWXtQ I'm sure most of you are familiar with karaoke.

    I haven't used FFmpeg much and I'm not really sure if what I want to achieve is possible with it. Is it possible to overlay two videos and add transparency to one of them, in this case for the colour black? How can I set the offset time of lyrics.avi? Here's the command that I've tried so far:

        ffmpeg -i video.mp4 -i lyrics.avi -filter_complex "nullsrc=size=854x480 [base]; [0:v] setpts=PTS-STARTPTS, scale=854x480 [upperleft]; [1:v] setpts=PTS-STARTPTS, scale=854x200 [bottomleft]; [base][upperleft] overlay=shortest=1 [tmp1]; [tmp1][bottomleft] overlay=shortest=1:y=280" -c:v libx264 -y karaoke.mp4
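
    A hedged sketch of one way to do the black-keyed overlay (not a tested drop-in command): it assumes an FFmpeg build that includes the colorkey filter, keys out pure black in lyrics.avi, and uses -itsoffset to delay the lyrics by 5 seconds as an example offset. The similarity/blend values (0.1) and the 5-second delay are placeholder numbers to tune, and keying on black will also remove any black used inside the lyric graphics themselves.

        ffmpeg -i video.mp4 -itsoffset 5 -i lyrics.avi -filter_complex \
          "[1:v]colorkey=black:0.1:0.1,scale=854:480[lyr]; \
           [0:v]scale=854:480[bg]; \
           [bg][lyr]overlay=shortest=1" \
          -c:v libx264 -y karaoke.mp4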

    Read the article

  • both ssl and non-ssl on single port

    - by Zulakis
    I would like to make my apache2 webserver serve both http and https on the same port. With the different methods I tried, it was either not working on http or not on https. How can I do this?

    Update: if I enable SSL and then visit the site with http, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Because of this, it seems very much possible to have both http and https on the same port. A first step would be to change this default page so it presents a 301 Moved header instead.

    Update 2: According to this, it is possible. Now the question is just how to configure apache to do it.
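
    One commonly suggested workaround, shown here only as an untested sketch: let the SSL vhost's 400 error ("speaking plain HTTP to an SSL-enabled server port") answer with a redirect to the https:// URL. Whether that particular 400 can actually be overridden depends on the Apache/mod_ssl version, and ErrorDocument with a URL sends a 302 rather than a 301; "server" is a placeholder hostname.

        # Hypothetical snippet for the *:443 vhost:
        #
        #   <VirtualHost *:443>
        #       SSLEngine on
        #       ...
        #       ErrorDocument 400 https://server/
        #   </VirtualHost>
        #
        # quick check: a plain-HTTP request against port 443 should now come back as a redirect
        curl -sI http://server:443/ | head -n 1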

    Read the article

  • Wiring my internet

    - by u8sand
    I have Verizon internet service and am currently using wifi. My router is in the basement, and my desktop computer is two floors up and on the other side of the house. Worst possible positioning, but that's just how things worked out. My wireless is currently extremely unstable, so I've decided to correct the problem by wiring my computer directly.

    The problem lies here: when redoing the room next to it (while the wall was open), we went ahead and ran some coaxial cable from our attic to our basement (with plenty of slack on both ends; don't ask me why we didn't run a CAT6 cable instead). The question is: can I use the coaxial cable to carry my internet connection? Naturally the router (which needs to stay where it is) takes a coaxial cable input and has Ethernet outputs. So maybe I would have to take an Ethernet cable, convert to coaxial, run the coaxial to my computer, and convert back to Ethernet. Is it even possible to convert from coaxial to Ethernet? Or do I have to attempt to fish a CAT6 cable through my house? I cannot just split the signal, because that would require two routers and two networks (which I don't believe would work with one cable and one ISP; correct me if I'm wrong). Thanks

    Read the article

  • Syncing Multiple Google Calendars and with Outlook and Android

    - by Fred Thomas
    Perhaps this is a multipart question, but I deal with a lot of calendars in my life and want to know if there is some way to sync them all together while maintaining appropriate privacy.

    I have a family calendar that my ex and I maintain for kid events, a personal calendar for my own life, and an Outlook work calendar for work. Ideally I'd look at my calendar on my Android phone. Is it possible to sync them all together? Is it possible for there to be one calendar to rule them all on my phone, but have the other calendars only block out the time slots taken by entries from the other calendars, without showing their details? (I don't want my date with Miss Hottie to appear in full on the family calendar, and I probably don't want my visit to the proctologist to appear in the corporate Exchange server.) Are there tools available to do this?

    Bonus question: can I do the same with my to-do lists? Double bonus question: how can I solve world hunger and help us all live together in peace? :-)

    Read the article

  • Best video codec to store my own collection

    - by Jack
    Hello! I think this question has already been asked, but with different flavours. My problem resides in the fact that my camera (Canon G9) creates video with an almost raw codec (I think it's plain old MPEG), so a 10-minute video is almost 900MB. I would like to convert these videos to a format that has a good trade-off between space and quality. I would prefer the quality to stay as close to the original as possible (of course a perfect match is not possible with lossy compression); I just want to save as much space as possible with a minimal loss of quality.

    Which codec should I look at? H.264? It seems to be the champion of the moment.. otherwise, which other ones could I try? XviD? Which parameters should I use? I mean, how many kbit/s is a fair bitrate to keep high quality? And what about the audio codec? The video specs are 640x480 at 30fps or 1024x768 at 15fps.. thanks in advance!
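
    A hedged example of the usual H.264 approach with FFmpeg: instead of picking a fixed kbit/s, use constant-quality (CRF) encoding so the encoder spends bits only where needed. The CRF value of 20, the slow preset, the 160 kb/s AAC audio and the filename are illustrative placeholders, not recommendations from the original post.

        # constant-quality H.264 + AAC; lower -crf means better quality and a bigger file
        ffmpeg -i MVI_1234.AVI \
               -c:v libx264 -preset slow -crf 20 \
               -c:a aac -b:a 160k \
               MVI_1234_h264.mp4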

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well.

    The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would make them break, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers.

    Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means a password in clear text, which doesn't sit well with me.

    Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and whether this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article

  • How does one skip "Windows did not shut down successfully" in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP-embedded to COTS hardware (Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 embedded.

    The problem is that the system is still sort of "embedded". The power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not like we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly.

    After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone, it does eventually move on and boot just fine, but we'd like to skip that step if possible. WinXP-embedded is designed to do this automatically, so I know it's possible.

    I've looked at the kernel switches but didn't see anything documented for "Skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the auto chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process so it doesn't hassle the user about a situation that will be the regular, normal situation?

    Read the article

  • How to limit SMTP delivery to hourly batches

    - by Jeremy W
    Moved over from StackOverflow. Sorry if you saw it there first.

    In an effort to keep us from being labeled spammers by major ISPs (in addition to SPF records, privacy policies, CAN-SPAM compliance and the like), I want to limit the amount of mail we send out per hour. Is this possible with the W2K3 SMTP server? I was looking at the outbound connection properties in the SMTP virtual server config screens... it's just not clear whether tinkering with those settings is going to do what I want.

    In a nutshell, I'd love for mail sent by this server to queue up and go out in batches of, for example, 5,000 messages every 10 minutes or so. Mail is being sent via ASP.NET. Also, I wouldn't be sending 1 million a day; probably 30,000 tops, and only a few times a month. I'm just trying to avoid a tidal wave of 30k going out in 1 minute and setting off every network spam-monitoring alarm in North America. I know I could do it with a combination of a console app and a scheduled job. My question is whether there is an easier way to accomplish this with the virtual SMTP server settings on Win2k3. Is this possible?

    Read the article

  • MySQL consuming all system memory on INSERT ... SELECT

    - by siete
    The mysql daemon is getting killed because Linux is running out of memory:

        Oct 24 07:41:23 <hostname> kernel: [82297.673701] Out of memory: kill process 13816 (mysqld) score 1839626 or a child

    There is a link with a workaround for this. It only happens when executing an INSERT ... SELECT query with a very huge result set. The MySQLTuner script says the theoretical maximum memory is less than 8GB, but top and munin show that it eats all available RAM and swap:

        [--] Total buffers: 560.0M global + 72.2M per thread (100 max threads)
        [OK] Maximum possible memory usage: 7.6G (43% of installed RAM)

    I've tried to tune some options with no results; these are the relevant ones:

        skip-locking
        max_connections = 100
        key_buffer_size = 512M
        max_allowed_packet = 32M
        table_open_cache = 2000
        open_files_limit = 3000
        sort_buffer_size = 16M
        read_buffer_size = 16M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 4
        query_cache_size = 16M
        query_cache_limit = 2M
        thread_concurrency = 4
        join_buffer_size = 32M
        tmp_table_size = 32M
        max_heap_table_size = 32M
        query_cache_limit = 8M
        bulk_insert_buffer_size = 64M
        myisam_max_sort_file_size = 50GB
        myisam_mmap_size = 10GB

    System summary:
    - OS: Linux Debian "Squeeze" 6.0.8 (upgraded yesterday)
    - RAM: 18GB
    - Swap: 18GB
    - MySQL: 5.1.72-2 (official Debian release)

    At this moment, upgrading or changing the OS or MySQL version is not possible. Is there any option that could help that I have missed? Sorry for my English, and thank you in advance!

    Edit: I'm only using MyISAM tables, and cannot change to InnoDB.
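
    One hedged workaround sketch (not from the original post): break the big INSERT ... SELECT into bounded chunks so no single statement has to work through a huge result set at once. The database, table and column names, the id range and the chunk size below are all placeholders.

        #!/bin/sh
        # copy rows from src_table to dst_table in 100k-row slices keyed on an integer id
        CHUNK=100000
        MAX_ID=$(mysql -N -e "SELECT MAX(id) FROM mydb.src_table")
        START=0
        while [ "$START" -le "$MAX_ID" ]; do
            END=$((START + CHUNK))
            mysql -e "INSERT INTO mydb.dst_table
                      SELECT * FROM mydb.src_table
                      WHERE id >= $START AND id < $END"
            START=$END
        done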

    Read the article

  • HP Pavillion DV6500 recovery disk failure

    - by Scott W
    I recently attempted to re-install Windows Vista on an HP Pavillion DV6500 using the factory recovery DVDs, but encountered a strange problem. When the recovery disk attempted to reformat the hard disk, it failed at 22%. The error message provided was not very informative, just the error code "0x400110020000 1005". A Google search turned up some people with a similar problem who asserted that HP has been known to ship corrupted recovery DVDs. The recovery disk did manage to reformat the recovery partition before failing, though, so recovering from the partition is no longer an option.

    It would be possible to reinstall from an off-the-shelf retail copy of Vista and then pull the drivers from HP's website, but I don't have access to a copy of Vista, and it would really be outrageous to have to purchase a new OS when I have a perfectly valid license already. I thought about biting the bullet and upgrading to Windows 7, but my understanding is that without Vista installed I'd be unable to use the upgrade version and would be forced to purchase the more expensive non-upgrade retail copy (!). Can anyone suggest a possible solution to this Catch-22? I've run out of ideas.

    Read the article

  • How to interrupt software raid resync?

    - by Adam5
    I want to interrupt a running resync operation on a Debian Squeeze software RAID. (This is the regular scheduled compare resync. The RAID array is still clean in such a case. Do not confuse this with a rebuild after a disk has failed and been replaced.)

    How do I stop this scheduled resync operation while it is running? Another RAID array is "resync pending", because they all get checked on the same day (Sunday night), one after another. I want a complete stop of this Sunday-night resyncing. [Edit: sudo kill -9 1010 doesn't stop it; 1010 is the PID of the md2_resync process.]

    I would also like to know how I can control the intervals between resyncs and the remaining time until the next one.

    [Edit 2: What I did for now was to make the resync go very slowly, so it no longer disturbs anything:

        sudo sysctl -w dev.raid.speed_limit_max=1000

    taken from http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html. During the night I will set it back to a high value, so the resync can terminate. This workaround is fine for most situations; nonetheless it would be interesting to know whether what I asked is possible. For example, it does not seem to be possible to grow an array while it is resyncing or its resync is "pending".]
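
    A hedged sketch of the sysfs controls, assuming a kernel that exposes /sys/block/mdX/md (Squeeze does) and using md2 as the example array; the Debian schedule itself lives in /etc/cron.d/mdadm, which calls the checkarray script, so that cron entry is where the Sunday-night run is controlled.

        # cancel the check that is currently running on md2
        echo idle | sudo tee /sys/block/md2/md/sync_action

        # (re)start a check by hand later, if wanted
        echo check | sudo tee /sys/block/md2/md/sync_action

        # per-array speed cap, same idea as the global sysctl but only for md2
        echo 1000 | sudo tee /sys/block/md2/md/sync_speed_max

        # the monthly schedule: comment out or edit the checkarray line here
        sudoedit /etc/cron.d/mdadm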

    Read the article

  • How to handle sh: fetch: command not found

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers, server commands, etc. I've recently had a problem with all of my sites on our servers going down at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

    - They said I have "strange logs" in my Apache webserver log, error: sh: fetch: command not found.
    - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    - The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
    - I have some WordPress sites with plugins getting errors like:
        PHP Warning: pack(): Type H: illegal hex digit G in...
        PHP Fatal error: Cannot use object of type stdClass as array in...
        PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
        PHP Fatal error: Call to undefined function file_exists() in...
        PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could give me some advice or point me in the right direction, I would greatly appreciate it! Thanks!

    Oh, and here are the specs for my server:
    - RAM: 2048MB
    - CPU Shares: 40
    - Primary Disk: 50GB
    - Data Transfer: 75GB
    - Port Speed: 5Mbps

    Read the article

  • How to set up multiple SSIDs with bandwidth limiting on a single wireless router?

    - by Rahul Narain
    I have an Asus WL-520GU wireless router connected to a cable modem that I use for wireless internet access in my apartment. I would like to set it up so that it provides two SSIDs: one secured and password-protected for my regular use, and a "guest" SSID that's unsecured but throttled to, say, 10% of the available bandwidth. What is the most straightforward way to do this?

    I've been looking into DD-WRT and Tomato, both of which support my router. DD-WRT supports setting up multiple SSIDs using the GUI, but I don't know if it's possible to limit the bandwidth of each SSID independently; point #12 in this FAQ thread says it's not possible to limit by day or by MAC address, which is discouraging but not conclusive. Tomato allows bandwidth limits in its QoS settings, going by the screenshot here, but multiple SSID support is still experimental and it doesn't look like it will work with the encryption settings or bandwidth limits in the GUI.

    I'd like to know a good way to do this which gives me the fewest opportunities for screwing up. I'm no stranger to the command line, if that turns out to be what's necessary, but if so, please explain what the commands are doing, because I don't have a good mental model of what needs to happen to set this up.
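
    A hedged command-line sketch of the throttling half, assuming a DD-WRT-style setup where the guest SSID ends up on its own virtual interface (wl0.1 is an assumption; check the real name with ifconfig); a token-bucket qdisc then caps that interface regardless of which client is connected. The rate numbers are placeholders, roughly 10% of a 10 Mbit/s uplink.

        # list interfaces and find the one carrying the guest SSID (wl0.1 is an assumption)
        ifconfig

        # cap everything leaving the guest interface at ~1 Mbit/s
        tc qdisc add dev wl0.1 root tbf rate 1mbit burst 32kbit latency 400ms

        # inspect, or remove the limit again
        tc -s qdisc show dev wl0.1
        tc qdisc del dev wl0.1 root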

    Read the article

  • What kind of server attacks should I be aware of nowadays

    - by Saif Bechan
    I have recently started running a web server, and there is a lot of information online, but it can all be a little confusing. I recently opened my logwatch logs and saw that I get attacked a lot by all sorts of bots. Now I am interested in a list of things I should definitely be aware of nowadays, and possible ways to prevent them. I have read stories about servers crashed by floods, crashed by email, and all sorts of crazy stuff.

    Things I have already done:
    - I have blocked all my ports except for the http and email ports.
    - I disabled IPv6; this was giving me a lot of named errors.
    - I have turned on spam DNS blackhole lists to fight spam: sbl.spamhaus.org, zen.spamhaus.org, b.barracudacentral.org.
    - I installed and configured mod_security2 on Apache.
    - No remote access to my databases is possible.

    That is all I did so far; beyond that I am not aware of any other threats. I want to know if the following things need to be protected against (see the sketch after this list for one example of the kind of measure I mean):
    - Can I be flooded by emails? How can I prevent this?
    - Can there be a break-in or a flood of my databases?
    - Are there things like http floods or whatever?
    - Are there any other things I should know before I go public with my server?

    I also want to know if there is some kind of checklist with must-have security protections. I know the OWASP list for writing good web applications; is there something similar for configuring a server?
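
    As one concrete example of the kind of hardening such a checklist usually includes, here is a hedged iptables sketch for damping floods of new connections; the thresholds are placeholders and this is a starting point, not a complete firewall.

        # put a ceiling on how fast brand-new TCP connections are accepted on port 80
        iptables -N HTTP_FLOOD
        iptables -A HTTP_FLOOD -m limit --limit 50/second --burst 200 -j RETURN
        iptables -A HTTP_FLOOD -j DROP
        iptables -A INPUT -p tcp --dport 80 --syn -j HTTP_FLOOD

        # drop a source that opens too many new SMTP connections within a minute
        iptables -A INPUT -p tcp --dport 25 -m state --state NEW -m recent --set --name SMTP
        iptables -A INPUT -p tcp --dport 25 -m state --state NEW -m recent --update \
                 --seconds 60 --hitcount 20 --name SMTP -j DROP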

    Read the article

  • compile ntp without ssl

    - by Zulakis
    I need to deploy ntp to a very space-critical PXE imaging system. (Yes, each KB matters.) The footprint needs to be as small as possible, so I want to compile ntp without linking OpenSSL. According to the manual this should be possible:

        If available, the OpenSSL library from http://www.openssl.org is used to support public key cryptography. The library must be built and installed prior to building NTP. The procedures for doing that are included in the OpenSSL documentation. The library is found during the normal NTP configure phase and the interface routines compiled automatically. Only the libcrypto.a library file and openssl header files are needed. If the library is not available or disabled, this step is not required.

    I already tried ./configure --without-openssl; however, this didn't help. This is my ldd output:

        ldd ntpd/ntpd
            linux-gate.so.1 => (0xb7706000)
            libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb76d5000)
            libcrypto.so.0.9.8 => /usr/lib/i686/cmov/libcrypto.so.0.9.8 (0xb7582000)
            librt.so.1 => /lib/i686/cmov/librt.so.1 (0xb7578000)
            libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb741d000)
            /lib/ld-linux.so.2 (0xb7707000)
            libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7419000)
            libz.so.1 => /usr/lib/libz.so.1 (0xb7404000)
            libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb73eb000)

    The system I am compiling on is 32-bit Debian Lenny using openssl 0.9.8g-15+lenny16. What is the correct configure option to compile ntp without openssl?
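
    A hedged way to find the switch this particular ntp release actually understands; the exact option name varies between versions, so --without-crypto below is a guess to verify against the --help output, not a documented answer.

        # list the crypto-related switches this ntp tarball's configure knows about
        ./configure --help | grep -iE 'crypto|ssl'

        # then rebuild from a clean tree with whichever disable switch is listed,
        # e.g. (assumption; confirm against the output above):
        make distclean
        ./configure --without-crypto
        make

        # verify that libcrypto is no longer pulled in
        ldd ntpd/ntpd | grep -i crypto || echo "no libcrypto linked"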

    Read the article

  • Apache Probes -- what are they after?

    - by Chris_K
    The past few weeks I've been seeing more and more of these probes each day. I'd like to figure out what vulnerability they're looking for, but haven't been able to turn anything up with a web search. Here's a sample of what I get in my morning Logwatch emails:

        A total of XX possible successful probes were detected (the following URLs contain strings that match one or more of a listing of strings that indicate a possible exploit):

        /MyBlog/?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        /?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 301
        /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        //index2.php?option=com_myblog&Itemid=1&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200

    This is coming from a current CentOS 5.4 / Apache 2 box with all updates. I've manually tried entering a few of them to see what they get, but those all appear to just return the site's home page. This server is just hosting a few Joomla! sites... but this doesn't seem to be targeting Joomla (as far as I can tell). Anyone know what they're probing for? I just want to make sure whatever it is I've got it covered (or not installed). The escalation of these entries has me a bit concerned.
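
    These requests look like the classic local-file-inclusion pattern aimed at the "My Blog" (com_myblog) Joomla extension: directory traversal plus /proc/self/environ, typically an attempt at environment/log poisoning. If that component isn't installed they should be harmless, but a hedged belt-and-braces sketch of blocking them with mod_rewrite is below; the .htaccess location and hostname are placeholders, and an equivalent mod_security rule would do the same job.

        # Hypothetical additions to the affected vhost's .htaccess (mod_rewrite enabled):
        #
        #   RewriteEngine On
        #   # refuse any request whose query string tries directory traversal or /proc/self/environ
        #   RewriteCond %{QUERY_STRING} proc/self/environ [NC,OR]
        #   RewriteCond %{QUERY_STRING} \.\./\.\. [NC]
        #   RewriteRule .* - [F]
        #
        # quick check that the block works (expect HTTP 403):
        curl -sI 'http://example.com/?option=com_myblog&task=../../../proc/self/environ%00' | head -n 1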

    Read the article

  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cPanel 1.0 (RC1). I am using the WordPress plugin BackupBuddy, which requires HTTP loopback connections to monitor/complete backups. This works fine on memelab.com.au, but doesn't work on any subdomain (e.g. staging.memelab.com.au). Is it possible to set up an A record or some such to remedy this? I'm aware of a workaround (setting WP_ALTERNATE_CRON), but I find it unsatisfactory due to the messy URLs. BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled

    Here is the reply from my host:

        ...as main domain have it's own separate DNS entry it have localhost entry which helps for looback connections where as subdomains don't have separate DNS zone, so it is not possible to create looback connections for it.

    I have cPanel access to the 'advanced zone editor'; is there anything tricky I can do there? Maybe 127.0.0.2? (I remember reading that there are at least 8 local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au, which points to 127.0.0.1.

    I've just tried entering a new A record: localhost.itours.memelab.com.au pointing to 127.0.0.2. I still get the warning in BackupBuddy that loopback is not active, and cPanel won't let me enter 127.0.0.1 (guess it doesn't work like that!)

        nslookup itours.memelab.com.au
        Server:   203.88.112.33
        Address:  203.88.112.33#53

        Non-authoritative answer:
        Name:     itours.memelab.com.au
        Address:  117.55.224.177
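
    A hedged way to see what is actually failing, run from a shell on the server itself (SSH access assumed, which shared hosts don't always provide): the --resolve trick forces curl to reach the subdomain at a specific IP, so the normal route and the 127.0.0.1 loopback path can be compared directly. The wp-cron.php path is just a convenient endpoint to poke.

        # does the subdomain answer over the normal route?
        curl -sI http://staging.memelab.com.au/wp-cron.php | head -n 1

        # same request forced over 127.0.0.1 (needs curl >= 7.21.3 for --resolve)
        curl -sI --resolve staging.memelab.com.au:80:127.0.0.1 \
             http://staging.memelab.com.au/wp-cron.php | head -n 1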

    Read the article

  • Combining Two .htaccess files .. Wordpress and OSCommerce SEO URL's

    - by William Langford
    I Googled over and over and got nowhere, so I figured I would give this a try. I have osCommerce in my httpdocs directory. Then I have /wordpress, but I changed the blog location to /blog.php with some mods. Works great. Now I want to add the SEO URLs from WordPress to my osCommerce .htaccess.

    osCommerce .htaccess:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteRule ^(.*)-p-(.*).html$ product_info.php?products_id=$2&%{QUERY_STRING}
        RewriteRule ^(.*)-c-(.*).html$ index.php?cPath=$2&%{QUERY_STRING}
        RewriteRule ^(.*)-m-([0-9]+).html$ index.php?manufacturers_id=$2&%{QUERY_STRING}
        RewriteRule ^(.*)-pi-([0-9]+).html$ popup_image.php?pID=$2&%{QUERY_STRING}
        RewriteRule ^(.*)-t-([0-9]+).html$ articles.php?tPath=$2&%{QUERY_STRING}
        RewriteRule ^(.*)-a-([0-9]+).html$ article_info.php?articles_id=$2&%{QUERY_STRING}
        RewriteRule ^(.*)-pr-([0-9]+).html$ product_reviews.php?products_id=$2&%{QUERY_STRING}
        RewriteRule ^(.*)-pri-([0-9]+).html$ product_reviews_info.php?products_id=$2&%{QUERY_STRING}
        RewriteRule ^(.*)-i-([0-9]+).html$ information.php?info_id=$2&%{QUERY_STRING}

    WordPress .htaccess:

        RewriteBase /blog.php/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /blog.php/index.php [L]

    Is this even possible with two RewriteBase directives? I looked at a way to do it with directory defines, but didn't think it was possible, as blog.php isn't a directory. Thanks!
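
    A hedged, untested sketch of one way to merge them into the single .htaccess in httpdocs, assuming all the WordPress permalinks live under /blog.php/...: route anything under /blog.php/ to the blog front controller before the osCommerce rules get a chance to match, then let the existing SEO rules follow unchanged.

        # Hypothetical combined httpdocs/.htaccess (sketch, adjust to taste):
        #
        #   Options +FollowSymLinks
        #   RewriteEngine On
        #   RewriteBase /
        #
        #   # WordPress: anything under /blog.php/... that isn't a real file goes to /blog.php
        #   RewriteCond %{REQUEST_URI} ^/blog\.php/ [NC]
        #   RewriteCond %{REQUEST_FILENAME} !-f
        #   RewriteRule . /blog.php [L]
        #
        #   # osCommerce SEO rules continue below, unchanged
        #   RewriteRule ^(.*)-p-(.*)\.html$ product_info.php?products_id=$2&%{QUERY_STRING}
        #   ...
        #
        # sanity check after editing (expect a blog page, not an osCommerce page):
        curl -sI http://example.com/blog.php/hello-world/ | head -n 1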

    Read the article
