Search Results

Search found 38034 results on 1522 pages for 'possible'.


  • nginx redirect proxy

    - by andrew
    I have a web app running on an nginx server at local IP 192.168.0.30:80, and this line in my /etc/hosts: 127.0.0.1 w.myapp.in. If someone accesses the app through the "w" subdomain it shows a WebDAV interface, otherwise it runs normally (for example, http://myapp.in goes into the app and http://w.myapp.in goes into the WebDAV interface - this is handled inside the app, nginx has nothing to do with it).

    Because I don't have DNS or anything like that, users must access the app by IP. The problem is that the WebDAV interface can't be reached that way, since you cannot use a subdomain with a bare IP address - unless each user adds a line to their local hosts file, which is not a solution.

    A possible solution: set up nginx so that a request to http://192.168.0.30 (port 80) goes into the app normally, but a request to http://192.168.0.30:81 (another defined port) is proxied internally to w.myapp.in, so the app sees the subdomain. Given the app, can this be done? If yes, what should I put in the nginx config file? And if you can think of a better solution, I'm open to anything.
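
    A minimal sketch of the port-81 idea, assuming the app listens on 127.0.0.1:80 and only inspects the Host header to decide between the app and WebDAV (names and ports are placeholders, not a tested config):

        server {
            listen 81;

            location / {
                # Proxy back to the app on port 80, presenting the "w" subdomain
                # so the app switches to its WebDAV interface.
                proxy_pass http://127.0.0.1:80;
                proxy_set_header Host w.myapp.in;
            }
        }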

    Read the article

  • Execute encrypted files but don't let anybody read them.

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot) and a web server should start automatically. The point is that I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby.

    I thought I could use file system encryption to encrypt the directory with the source code (or even a whole partition), but the web server user must be able to read it automatically after booting. Nobody is allowed to log in as the web server user (or any other user), so no one else can read the contents.

    My questions: Is this possible? Because I give away the whole VM, anybody could mount its virtual disks and read them (except the encrypted one). Is it then possible to find the key the web server user needs to decrypt the files, and decrypt them manually? Or is it safe to give such a VM away? The problem is that everything needed to decrypt the data must be included somewhere in the VM, otherwise the web server cannot start automatically. Maybe I'm completely wrong and you have another tip for securing the source code.

    Read the article

  • Want to send my neighbors to a certain website via DNS, but don't have a clue how. [closed]

    - by Akku
    My neighbors have an unsecured WiFi router, and I could log in to its administration web UI because no password was set. I don't know which of my neighbors they are, and I'd like to configure their router so that instead of Google and Facebook they end up on my website, where I've set up a warning in German. It's this page: http://www.abelssoft.de/liebenachbarn/

    Basically, I just want to see if and how this is possible - I'm aware that I could simply set a WiFi password and have them call their network provider to reset the thing, but I really want to see if this could work, because it would be a way cooler effect :-).

    The router interface doesn't allow custom redirects, only filters. BUT I can set the DNS servers it uses, so I thought it might be possible to set up a custom DNS server, make it the router's primary DNS, and redirect Google to the URL above. Is this possible? If so, please describe the steps I would have to go through to achieve this. Note that I'm not a super-Linux-skilled person; I have a DynDNS account and a Windows machine it points to, as well as Apache+Tomcat, if that helps. I could also set up virtual machines on the Windows server and redirect to those using a different port. Or is there maybe a web service that provides such custom DNS?
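
    For what it's worth, a sketch of how such a custom DNS server could look with dnsmasq - it answers a couple of names with the IP hosting the warning page and forwards everything else (the IP and the upstream resolver are placeholders):

        # /etc/dnsmasq.conf (sketch)
        # Answer these names with the server hosting the warning page
        address=/google.com/203.0.113.10
        address=/facebook.com/203.0.113.10
        # Forward everything else to a real resolver
        server=8.8.8.8

    The router's settings would then have to hand out this machine as the primary DNS server.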

    Read the article

  • Connecting multiple access points

    - by mohsen farahanipoor
    I'm working on a big project. We want to create a wireless network throughout a building with 15 floors. My idea is to set up at least one wireless access point on each floor and, where the signal attenuates, use access point extenders/repeaters. I selected the DWL-6600AP from among D-Link's business access points. I want to implement a single wireless LAN throughout the building.

    Is it possible to combine multiple DWL-6600 access points to achieve just a single WLAN? Can a wireless switch/controller do this task? Can these access points interfere with each other? What is the solution? I have read the learning materials on D-Link's website, but I am still confused.

    My other question is about connecting these APs to the wireless switch/controller - is it possible to use powerline networking to connect a DWL-6600 to the wireless controller device? My main goal is that clients with portable devices such as laptops should be able to connect to the network easily, to share and communicate without any further manual configuration, since they are already connected to a single network.

    Read the article

  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site, and we would like to have up-to-date copies of some of the tables sent across the internet to our servers at our office. We would like to copy only a few of the tables, because the bandwidth required to log-ship the entire database (our current solution) is too high. Replication directly to our servers is also out of the question, as our servers are not accessible from the internet and management does not want to do replication (more on that later).

    One possible idea we had is to use some form of replication on the tables we need into a second, smaller database on the same server, and then log-ship that second database. However, management is concerned because the client has broken replication on us in the past (it was between two servers on their internal network, though) and would like to stay away from it if possible.

    Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against replication; I just need compelling arguments to convince management to do it. This is to be set up at multiple sites running either SQL 2005 or SQL 2008; we will have both versions on our end to restore the data to, so that is not an issue. Thank you.
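
    For reference, a rough T-SQL sketch of what publishing only selected tables (articles) looks like in transactional replication - database, publication and table names are placeholders, and this assumes a Distributor is already configured:

        USE BigClientDB;
        EXEC sp_replicationdboption @dbname = N'BigClientDB',
             @optname = N'publish', @value = N'true';

        EXEC sp_addpublication @publication = N'SelectedTablesPub',
             @repl_freq = N'continuous', @status = N'active';

        -- one sp_addarticle call per table that is actually needed
        EXEC sp_addarticle @publication = N'SelectedTablesPub',
             @article = N'Orders',    @source_owner = N'dbo', @source_object = N'Orders';
        EXEC sp_addarticle @publication = N'SelectedTablesPub',
             @article = N'Customers', @source_owner = N'dbo', @source_object = N'Customers';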

    Read the article

  • Can a Windows Domain play along with a Hosted Exchange service?

    - by benzado
    I'm setting up a computer network for a small (10-20 person) company. They are currently using a hosted Exchange service they are totally happy with. Other than that, they are starting from scratch (the office doesn't even have furniture yet). They will need some kind of file sharing server set up in their office.

    If I set up a machine as a file server and nothing more, users will have three passwords to deal with: local machine, file server, and email. If I set up a domain controller, the identities for the local machine and the file server will be the same. But what about the hosted Exchange server? Must the users have a separate email password, or is it possible to combine the two? (I realize it might depend on the specific hosting provider, but is it possible at all?)

    If not, it seems like I have these options: deal with it, so users have a separate email password; host Exchange on the local server, which is probably more than they want to manage in-house; or purchase a hosted VPS, make it part of the domain, and host Exchange there (or can/should a VPS be a domain controller?). I realize I have a lot of questions in there. The main one: is there any reason to use a hosted Exchange service if I'm setting up other Windows services?

    Read the article

  • Caching/preloading files on Linux into RAM

    - by Andrioid
    I have a rather old server that has 4GB of RAM and is pretty much serving the same files all day, but it is doing so from the hard drive while 3GB of RAM are "free". Anyone who has ever run a RAM drive can attest that it's awesome in terms of speed. The memory usage of this system is usually never higher than 1GB of the 4GB, so I want to know if there is a way to use that extra memory for something good.

    Is it possible to tell the filesystem to always serve certain files from RAM? Are there any other methods I can use to improve file-reading performance by using RAM? More specifically, I am not looking for a 'hack' here. I want filesystem calls to serve the files from RAM without needing to create a RAM drive and copy the files there manually - or at least a script that does this for me. Possible applications here: web servers with static files that get read a lot, application servers with large libraries, desktop computers with too much RAM. Any ideas?

    Edit: Found this very informative: The Linux Page Cache and pdflush. As Zan pointed out, the memory isn't actually free. What I mean is that it's not being used by applications, and I want to control what should be cached in memory.
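
    One approach that keeps the normal filesystem path is to warm (and optionally lock) the page cache with vmtouch; a sketch, assuming vmtouch is installed and the path is a placeholder:

        # Load the static files into the page cache once (e.g. from a boot script)
        vmtouch -t /var/www/static

        # Or keep them resident: lock the files into memory and stay running as a daemon
        vmtouch -dl /var/www/static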

    Read the article

  • Starting multiple Chrome full screen instances on multiple monitors from (batch) script

    - by Bob Groeneveld
    My goal is to show different web content full screen on multiple monitors, automatically after booting, from a single computer. The browser I would like to use is Chrome; if Chrome does not support this but Firefox does, that would be fine. The OS I would prefer is Windows; if it turns out that only Linux is possible, that would be fine too.

    On Windows it is possible to set the position of the Chrome browser window (--window-position=) and make Chrome start in full-screen mode (--kiosk). Using these options combined, you can start Chrome full screen on any of the desktops/screens connected to your computer. I have managed to get this working. However, if I then try to do the same thing a second time, to have Chrome full screen on a second screen, the second Chrome window opens over the first window, no matter what coordinates I use for the --window-position parameter. I have tried using Chrome profiles and copying the Chrome directory and starting the second chrome.exe. All of these result in the same behaviour.
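
    For what it's worth, a batch sketch of the workaround usually suggested - giving each instance its own --user-data-dir so the second launch doesn't just open a window in the first instance (paths, URLs and monitor coordinates are placeholders):

        @echo off
        rem First monitor at 0,0
        start "" "C:\Program Files\Google\Chrome\Application\chrome.exe" ^
          --user-data-dir=C:\kiosk\profile1 --window-position=0,0 --kiosk http://example.com/screen1

        rem Second monitor, assumed to start at x=1920
        start "" "C:\Program Files\Google\Chrome\Application\chrome.exe" ^
          --user-data-dir=C:\kiosk\profile2 --window-position=1920,0 --kiosk http://example.com/screen2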

    Read the article

  • How to better copy&paste big files over RDP?

    - by WebMAOhist
    Recently I made a few attempts to copy & paste a big (1.2 GB) file to a remote computer over RDP. The remote computer is a virtual test machine running MS Windows Server 2008 Datacenter.

    First I tried to copy & paste before midnight, when the transfer speed was limited by the client computer's ISP to 100 kB/s. It would have required a few hours, and I was forced to cancel the transfer since the remote desktop became too unresponsive and sluggish. So I restarted it after midnight, when my local transfer speed is over 4 MB/s. My impression is that, independently of the transfer speed, the remote computer becomes sluggish while copying over RDP, whereas downloading from the internet doesn't make the remote host sluggish. As far as I understand, this is because the remote computer's clipboard, and hence its memory, gets overloaded by the transfer. How can I control (restrict) the usage of the clipboard for a specific process (the pasting of a file)? What are the possible ways to control it?

    Update: After reading that the slow transfer is caused by the encryption used for copy & paste over RDP, and since I am really interested in overall efficiency - both the time to get the file across and the ability to keep working without waiting - I changed the question title from "How to control the usage of the remote desktop clipboard for pasting a big file?" to "How to better copy&paste big files over RDP?". For example, is it better to copy & paste one huge (zip) archive, or to unzip it and copy & paste a folder with the unzipped files? More exactly: what are possible ways to improve the overall experience, i.e. the speed of transfer (availability of the needed file) and the responsiveness of the remote host (making the remote computer available for work before the copy & paste completes)?
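
    One commonly suggested alternative to the clipboard, sketched here: enable drive redirection in the RDP client and pull the file on the remote host with robocopy, which can be restarted and throttled (paths and file name are placeholders):

        rem Run on the remote host; drive C: of the local machine appears as \\tsclient\C
        rem /Z = restartable mode, /IPG:50 = wait 50 ms between blocks to keep the session responsive
        robocopy \\tsclient\C\Temp D:\Incoming bigfile.zip /Z /IPG:50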

    Read the article

  • How can the route between two private IPs go via public IPs?

    - by Gilles
    I'm trying to understand what this output from traceroute means. I changed the IP addresses for privacy but retained the public/private IP range distinction.

        traceroute.db -e -n 10.1.1.9
        traceroute to (10.1.1.9), 30 hops max, 60 byte packets
         1  10.0.0.1  0.596 ms   0.588 ms   0.577 ms
         2  10.0.0.2  1.032 ms   1.029 ms   1.084 ms
         3  10.0.0.3  3.360 ms   3.355 ms   3.338 ms
         4  23.0.0.4  3.974 ms   4.592 ms   4.584 ms
         5  23.0.0.5  13.442 ms  13.445 ms  13.434 ms
         6  45.0.0.6  13.195 ms  12.924 ms  12.913 ms
         7  67.0.0.7  52.088 ms  51.683 ms  52.040 ms
         8  10.1.1.8  46.878 ms  44.575 ms  44.815 ms
         9  10.1.1.9  45.932 ms  45.603 ms  45.593 ms

    The first 10.0.* range is inside my organisation. The last 10.1.* range is another site of my organisation. The intermediate addresses belong to various ISPs. I expect that there is some kind of VPN between the two sites, but I don't know much about our network topology.

    What I don't understand is how the route can go from a private address through public addresses back into private addresses. Searching led me to "Public IPs on MPLS Traceroute", which gives a possible explanation: MPLS. Is MPLS the only possible, or the most likely, explanation? Otherwise, what does this tell me about our network infrastructure? Bonus question for my edification: in this scenario, who generates the ICMP TTL-exceeded packets and, if relevant, who mangles their source and destination addresses?

    Read the article

  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Using

        find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6

    on the root music folder and then deleting every .flac (I have copies of them in the archive) seems straightforward. After trying it with one file, it worked and all of the tags were transferred too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image has already been extracted: Embed album art in OGG through command line in linux.

    One possible approach I thought about is extracting the album art from every song (not every song has one, though, and some even have 2 or 3!), temporarily saving it, and then using the script to include it in the finished .ogg. But then I want to increase the number of processes xargs runs simultaneously to save time, so the temporary images need distinct names. Is there a (Linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art; it really is a shame, since these two formats should (in theory) share the same tag format.

    Edit: 15 days and no one has even tried to answer. It's funny, most of my questions don't get answered. Too hard? Wrong SE site?
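
    A sketch of how the per-file temp-name problem could be handled: a small wrapper script that xargs calls once per file, using mktemp so parallel jobs never collide. The embedding step is left to the script linked above, referred to here by the assumed name embed-cover.sh:

        #!/bin/sh
        # convert-one.sh <file.flac> - sketch; assumes metaflac and oggenc are installed
        flac="$1"
        ogg="${flac%.flac}.ogg"
        tmpdir=$(mktemp -d)            # unique per process, safe for parallel xargs

        oggenc -q6 -o "$ogg" "$flac"   # regular tags are carried over by oggenc
        metaflac --export-picture-to="$tmpdir/cover.jpg" "$flac" 2>/dev/null \
          && ./embed-cover.sh "$tmpdir/cover.jpg" "$ogg"   # hypothetical embedding script

        rm -rf "$tmpdir"

    It could then be run in parallel with something like: find ./ -iname '*.flac' -print0 | xargs -0 -n1 -P4 ./convert-one.sh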

    Read the article

  • Crond offset five minute schedule

    - by sam
    Is it possible to offset a cron script set to run every 5 minutes? I have two scripts: script 1 collects some data from one database and inserts it into another; script 2 pulls out this data, along with a lot of other data, and creates some pretty reports from it. Both scripts need to run every 5 minutes. I want to offset script 2 by one minute so that it can create its reports from the new data. E.g. I want script 1 to run at :00, :05, :10, :15 [...] and script 2 to run at :01, :06, :11, :16 [...] every hour. The scripts are not dependent on each other, and script 2 must run regardless of whether script 1 was successful or not, but it would be useful if the reports had the latest data. Is this possible with cron?

    PS: I have thought about putting both commands in a shell script so they run immediately after each other, but this wouldn't work; sometimes script 1 gets hung up waiting for external APIs etc., so it might take up to 15 minutes to run, but script 2 must run every 5 minutes regardless, so doing it this way would stop/delay the execution of script 2. If I could set this in cron, script 2 would run regardless of what script 1 is doing.
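
    For reference, a crontab sketch of the offset (script paths are placeholders); many cron implementations accept a step on a range, and an explicit minute list works everywhere:

        # script 1 on the usual 5-minute grid
        */5 * * * *     /usr/local/bin/script1.sh
        # script 2 one minute later, via a stepped range...
        1-56/5 * * * *  /usr/local/bin/script2.sh
        # ...or, fully portable, an explicit list:
        # 1,6,11,16,21,26,31,36,41,46,51,56 * * * * /usr/local/bin/script2.sh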

    Read the article

  • Ubuntu Natty 11.04, Turning the wireless switch off; switches it off permanently!

    - by ZiGi
    I'm using an HP Pavilion dv2000. I turned the WiFi switch off by mistake; the LED turned orange and the WiFi got disconnected. Now when I turn the switch back on, it stays orange and the WiFi still isn't functional. This happened before, and I found a fix that worked by searching Google - it was done via terminal commands and I didn't have to download anything - but I can't find that solution anymore. wlan0 shows up when I use:

        :~$ iwconfig
        ...
        wlan0  IEEE 802.11abg  ESSID:off/any
               Mode:Managed  Access Point: Not-Associated  Tx-Power=off
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:off

    More results:

        :~$ sudo ifconfig wlan0 up
        SIOCSIFFLAGS: Operation not possible due to RF-kill
        :~$ rfkill list all
        1: phy0: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes
        :~$ sudo rfkill unblock all
        :~$ rfkill list all
        1: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: yes
        :~$ sudo ifconfig wlan0 up
        SIOCSIFFLAGS: Operation not possible due to RF-kill

    It's still hard blocked, even though the switch is turned on; it gives the same result either way. A pointer to a page with a working solution would be a much appreciated answer!
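
    One workaround that is often reported for HP laptops whose switch state gets stuck - an assumption, it may not apply to every dv2000 - is to reload the hp_wmi platform driver, which owns the hard-block state, and then unblock:

        # reported workaround for a stuck rfkill hard block on HP laptops (may not apply here)
        sudo modprobe -r hp_wmi
        sudo rfkill unblock all
        sudo modprobe hp_wmi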

    Read the article

  • What is the correct authentication mechanism when there are users inside and outside the domain?

    - by Gary Barrett
    We have a Windows 7 Enterprise desktop data-entry app for mobile (laptop) users, with a local SQL Server 2008 R2 Express database that syncs data with a central SQL Server 2008 R2 database. Authentication is required before syncing the data. The existing group of users is part of the organisation's domain - the normal scenario - and they connect to the SQL Server directly. But there are plans for a second group of app users who belong to various partner organisations, so they are outside our domain and have their own separate domains and accounts. The aim is to deploy the desktop app to them so they can periodically sync data to our SQL Server.

    What I am uncertain of: Is it possible to authenticate users from another domain? Can permissions be managed via Active Directory, etc.? Which authentication mechanism should be used in this scenario: Windows, Forms, SQL, etc.? The IT people are requesting that users of the system be managed via Active Directory. Is it possible to manage the external domain users' access via Active Directory?

    Read the article

  • Snapshotting single disk of running Hyper-V VM

    - by modelnine
    I'm currently somewhat at a loss as to how to create a snapshot of a single virtual hard disk of a running Hyper-V VM. Generally, creating a differencing disk while a VM is shut down is no problem (i.e., call the New-VHD cmdlet and pass a ParentPath, then update the VHD binding of the respective VM device). But while the VM is running, all I can find is checkpointing the VM as a whole, which creates snapshots of all attached disks and leaves the VM state in a form that isn't easily processed by external tools (i.e., it requires reading additional metadata from the VM).

    What I'd like to happen for a single-disk snapshot (in my understanding) is: pause the VM; rename the current disk to some other name marking it as the base snapshot; create a new VHD that has the renamed VHD as its parent path and is marked as "current"; swap the VM's VHD for the snapshotted hard disk to the newly created differencing VHD; resume the VM. Is there any means to do this programmatically?

    Update: I've seen that this is actually possible with SCSI disks, i.e. pause the VM, remove the SCSI disk, make the snapshot, reattach the SCSI disk at the same position, resume the VM - and the VM resumes properly. But is something similar also possible with Generation 1 machines for the boot disk, which is always IDE?
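
    A rough PowerShell sketch of the SCSI-disk variant described in the update - VM name, controller location and paths are placeholders, and this is not a supported way to snapshot the IDE boot disk of a Generation 1 VM:

        # Sketch: snapshot a single SCSI data disk of a running VM
        Suspend-VM -Name 'TestVM'

        $disk = Get-VMHardDiskDrive -VMName 'TestVM' -ControllerType SCSI `
                    -ControllerNumber 0 -ControllerLocation 1
        $disk | Remove-VMHardDiskDrive

        # Turn the current VHD into the base and put a new differencing disk on top
        Rename-Item -Path 'D:\VMs\data.vhdx' -NewName 'data-base.vhdx'
        New-VHD -Path 'D:\VMs\data.vhdx' -ParentPath 'D:\VMs\data-base.vhdx' -Differencing

        Add-VMHardDiskDrive -VMName 'TestVM' -ControllerType SCSI `
                    -ControllerNumber 0 -ControllerLocation 1 -Path 'D:\VMs\data.vhdx'
        Resume-VM -Name 'TestVM'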

    Read the article

  • DNS resolve .com domain on local domain

    - by Joost Verdaasdonk
    I'm building a local 2008 R2 domain as a test case, to be able to write a roadmap for the real new domain that needs to be created soon. What I would like to know is whether I can create a record in DNS that points the domain names www.example.com and example.com to one of the servers in my network. I tried creating an A record for it, but that doesn't work. To be honest I'm not even sure if this is possible - so, can I do this? That way I would be able to fully test all our services (and the web app) offline before I build the real domain and switch the DNS records at the provider. Some advice, if this is possible, on where to start is appreciated.

    The solution (thanks Brent):
    - Create a new forward lookup zone for example.com.
    - Create an empty A record pointing to the IP of the web server you are targeting.
    - If www is needed, create an A record with the name www and the IP of your web server.
    - For subdomains, repeat the process with names such as sub or www.sub (and the IP of your web server).
    Be aware of the DNS cache while you are in this process; things can take time. Alternatively, right-click the server and choose Clear Cache, and run ipconfig /flushdns in CMD to flush the client cache.
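
    The same records can also be created from the command line with dnscmd on the 2008 R2 DNS server - a sketch with a placeholder IP:

        rem Run from an elevated command prompt on the DNS server
        dnscmd /ZoneAdd example.com /DsPrimary
        dnscmd /RecordAdd example.com @   A 192.168.0.30
        dnscmd /RecordAdd example.com www A 192.168.0.30
        rem Clear the server and client caches after changes
        dnscmd /ClearCache
        ipconfig /flushdns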

    Read the article

  • Openfire: Granular alerts

    - by R.S.
    Our organization has had an Openfire server up and running for about a year now. So far we have used it for messaging in the IT department and for alerts to all users. We hit a snag this week when one system went down and several notifications were sent out to inform users of progress. Some of the recipients were radiologists who do not use the particular system in question, and they found the messages more of an annoyance than informative. Since then I have been tasked with finding a more granular system for alerts.

    I am confident that Openfire can handle this, and I have just about settled on a way of getting it to work. My idea is to create half a dozen or so users, for example: Staff, Doctor, Assistant and Supervisor. Using Spark as our messenger has worked great so far, so I would like to stick with that if possible. With that in mind, under the advanced login features the resource name can be changed to something unique, and non-unique users can log in under the same account; however, when a message is sent to one of these users, delivery is inconsistent. Currently I have 4 users logged in under the Assistant account, and it seems only one of them receives the messages. Is this scenario even possible? I am avoiding working with groups in Openfire because that function is atrocious. I could possibly integrate the system with our Active Directory, but I don't think that would get us to a workable solution any quicker or more efficiently.

    Read the article

  • Inheriting file ownership on linux

    - by John Hunt
    We have an ongoing problem here at work. We have a lot of websites set up on shared hosts; our CMS writes many files to these sites and allows users of the sites to upload files, etc. The problem is that when a user uploads a file on the site, the owner of that file becomes the web server user, which prevents us from changing permissions etc. via FTP.

    There are a few workarounds, but what we really need is a way to set a "sticky" owner, if that's possible, on new files and directories created on the server - e.g., rather than PHP writing the file as the apache user, the file takes on the owner of the parent directory. I'm not sure if this is possible (I've never seen it done). Any ideas?

    We're obviously not going to get a login for apache on the server, and I doubt we could get into the apache group either. Perhaps we need a way of allowing apache to set at least the group of a file; that way we could set the group to our FTP user in PHP and set 664 and 775 for any files that are written? Cheers, John.
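
    As far as standard Linux filesystems go there is no owner inheritance, but the group half of this can be sketched with the setgid bit on the upload directory plus a permissive umask in PHP (group name and path are placeholders):

        # New files created under the upload dir inherit the directory's group
        chgrp ftpgroup /var/www/site/uploads
        chmod 2775 /var/www/site/uploads   # the leading 2 is the setgid bit

        # In PHP, before writing files, relax the umask so the group keeps write access:
        #   umask(0002);   // files come out 664, directories 775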

    Read the article

  • Exchange 2010 - Certificate error on internal Outlook 2013 connections

    - by Lorenz Meyer
    I have Exchange 2010 and Outlook 2003. The Exchange server has a wildcard SSL certificate for *.domain.com installed (for use with autodiscover.domain.com and mail.domain.com). The local FQDN of the Exchange server is exch.domain.local. With this configuration there is no problem. Now I have started upgrading all Outlook 2003 installations to Outlook 2013, and I consistently get a certificate error in Outlook: "The name on the security certificate is invalid or does not match the name of the site."

    I understand why I get that error: Outlook 2013 is connecting to exch.domain.local while the certificate is for *.domain.com. I was ready to buy a SAN (subject alternative names) certificate containing the three names exch.domain.local, mail.domain.com and autodiscover.domain.com, but there is a hindrance: the certificate provider (in my case GoDaddy) requires each domain to be validated as our property, which is not possible for an internal domain that is not accessible from the internet. So that turns out not to be an option. Creating a self-signed SAN certificate with an enterprise CA is another barely viable option: there would be a certificate error on every webmail access, and I would have to install the certificate on all Outlook clients.

    What is a recommended, viable solution? Is it possible to disable certificate checking in Outlook? Or how could I change the Exchange server configuration so that the public domain name is used for all connections? Or is there another solution I'm not thinking of? Any advice is welcome.
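
    For reference, a sketch of the usual direction for that last question - repointing the internal URLs and the Autodiscover SCP at the public name from the Exchange Management Shell (server and virtual-directory names are placeholders, and internal DNS must then resolve mail.domain.com to the CAS):

        Set-ClientAccessServer -Identity EXCH `
            -AutoDiscoverServiceInternalUri https://autodiscover.domain.com/Autodiscover/Autodiscover.xml

        Set-WebServicesVirtualDirectory -Identity "EXCH\EWS (Default Web Site)" `
            -InternalUrl https://mail.domain.com/EWS/Exchange.asmx

        Set-OABVirtualDirectory -Identity "EXCH\OAB (Default Web Site)" `
            -InternalUrl https://mail.domain.com/OAB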

    Read the article

  • Transfer disk contents *without* cloning tools

    - by Chris Cummins
    Is it possible to "clone" a disk that contains programs by copying all of the disk contents (preserving file attributes) from the source to the destination disk, then unplugging the source disk and changing the drive letter of the destination disk to match that of the source?

    Context: I have a two-disk Windows 8 system with a system drive and a data drive. Recently, the data drive developed a number of bad sectors leading to I/O errors. I have been sent a replacement drive, so I simply need to clone the contents of the data drive onto the replacement. The drive contents include documents and media, user folders (My Documents and related), and some programs (games etc.).

    Problem: the bad sectors on the source disk cause most disk-cloning tools to fail with read errors. Attempted approaches include:
    - Disk clone from a live boot environment with Acronis True Image: fails due to read errors.
    - Disk clone from a live boot environment with Clonezilla: fails due to read errors.
    - Disk clone using Roadkil's Unstoppable Copier: fails due to hardware timeouts in the HDD (the application hangs indefinitely).
    - A straightforward copy from source to destination disk using FreeFileSync (preserving file attributes and metadata): this succeeds.

    So at the moment I have a replacement disk that contains all of the data from the original disk. Now all I need is to somehow get Windows to replace all references to the old disk with the new one. Is this possible by simply swapping the assigned drive letters? Any help would be greatly appreciated, thanks!
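
    If swapping letters is all that is left, a sketch of doing it with diskpart (the volume numbers are placeholders - check them with list volume first):

        rem diskpart script sketch; run "diskpart /s script.txt" or type the commands interactively
        list volume
        rem the old data drive
        select volume 3
        remove letter=D
        rem the replacement drive
        select volume 4
        assign letter=D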

    Read the article

  • Configuring MySQL for Power Failure

    - by Farrukh Arshad
    I have absolutely no experience with databases and MySQL. The problem is that I have an embedded device running a MySQL database with a web-based application. When I shut down the embedded device, it simply cuts the power; I cannot do a controlled shutdown. Given this situation, how can I configure MySQL to protect it from failures and, in case of a failure, have the best possible support for recovering my database? While researching this, I came across the InnoDB engine as well as some configuration options to set, such as sync_binlog=1 and innodb_flush_log_at_trx_commit=1. I have noticed that my default engine is InnoDB and that binary logs are also enabled. What other configuration should I make for the best possible failure and recovery support?

    Updated: I will be using the InnoDB engine, which supports transactions. My question is how best to configure it (InnoDB + MySQL) so that it provides the best possible fail-safe and crash-recovery behaviour. One configuration option I came across is to enable binary logging, which InnoDB uses at recovery time. Regards, Farrukh Arshad
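
    A sketch of the durability-related settings that are usually combined for this, choosing the conservative values (whether the storage actually honours flushes is a separate question about the hardware):

        # /etc/my.cnf (sketch) - durability over performance
        [mysqld]
        default-storage-engine         = InnoDB
        innodb_flush_log_at_trx_commit = 1        # flush the redo log at every commit
        sync_binlog                    = 1        # fsync the binary log at every commit
        innodb_doublewrite             = 1        # protect against torn/partial page writes
        innodb_flush_method            = O_DIRECT # avoid double buffering in the OS cache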

    Read the article

  • How to provide users with isolated drive letters in Windows 2008 R2 (Terminal Server)

    - by Pierre
    I need to be able to host several RDP sessions on a Terminal Server, where users in group A see a drive X: mapped to a given folder on the server and users in group B see the same drive letter X: mapped to another folder. For instance:

        User 1, Group A    X: --> C:\data\A
        User 2, Group A    X: --> C:\data\A
        User 3, Group B    X: --> C:\data\B
        User 4, Group C    X: --> C:\data\C

    Is this possible? If so, how do I configure the virtual drive mapping so that the user has nothing special to do; i.e. I want the letter X: to be available to RemoteApps launched by the user, and also if the user logs in to the remote desktop. Can I somehow use subst to get this to work? I would like to avoid, if possible, mounting drive letters on local shares (i.e. I don't like the idea of having to go through \\localhost\data-A to reach the user's data).
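
    A sketch of the per-group logon script the subst idea suggests - group names and paths are placeholders, and whether subst mappings stay isolated per RDP session should be verified on the target server (Group Policy Preferences drive maps with item-level targeting are the share-based alternative):

        @echo off
        rem Logon script sketch: map X: per group via subst
        subst X: /d >nul 2>&1
        whoami /groups | findstr /i /c:"DOMAIN\GroupA" >nul && subst X: C:\data\A
        whoami /groups | findstr /i /c:"DOMAIN\GroupB" >nul && subst X: C:\data\B
        whoami /groups | findstr /i /c:"DOMAIN\GroupC" >nul && subst X: C:\data\C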

    Read the article

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with one [CLC+CC] machine and two [NC] machines. I have another Cloud-B with the same configuration (both using Ubuntu Enterprise Cloud). Both of them work fine individually, on the same LAN. Now, if I want to add an NC of Cloud-A to the CC of Cloud-B (in case the resources of Cloud-B are exhausted), how can I make that possible? I guess this calls for interoperability. Could you please explain what exactly happens when we ask for an instance - does the direct interaction happen between the client and the NC, or does it go through the CLC and CC?

    What I want to say is: say there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may become exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background? Please reply. I am sorry if I'm making a mistake anywhere. Thanks in advance :) Regards, www.TechProceed.com

    Read the article

  • Discover the public ip of a network without being connected

    - by Martin Trigaux
    Let's say I'm next to a network and can see the traffic (with airodump or a similar tool) but cannot decipher it (because I am not connected to the network). Is it possible to discover the public IP address of the network? I know the MAC addresses of the users connected to the network, but do I know the router's? If yes, maybe there is a way to do the matching. I know IP addresses are not permanent, but some addresses are static and never change. Maybe there is a database of MAC addresses that has recorded this; Google has a database that matches MAC addresses to geographical coordinates, so why not one for IP addresses?

    Another idea: if I know where I am, I can maybe guess the IP range used in the city by the ISP (is that findable?) and then try to "ping" each IP in the range (if it is a /24 that's feasible, maybe even a /16). Would I get some information like the MAC of the box, or see some traffic on the network? These are two ideas I had. I don't know if they are doable, and they are certainly not perfect. Can you think of others? By trying several methods, maybe I can make a guess with a bit of luck. Thank you

    Read the article

  • How to setup RAID 1 with Intel RST on an existing Windows 7 system?

    - by instcode
    I'd like to set up RAID 1 using Intel Rapid Storage Technology on my Windows 7 64-bit system. I have a 1TB SATA HDD with the Windows 7 system installed on the first primary partition (leftmost, ~200GB); the rest of this HDD is unallocated (~800GB). I bought another 2TB SATA drive, created a primary partition (leftmost, ~500GB) and filled it with my data; the rest of this HDD is unallocated (~1.5TB). A quick disk layout (XXX is the unallocated region):

        HDD1 (1TB): [ 200GB C:\ SYSTEM  | XXXXXXXXXXXX ]
        HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXXXXXXXX ]

    Now, I want to create a 500GB RAID 1 partition (I'm not sure if "partition" is the correct word here) on the rightmost part of the two HDDs above, without losing any existing data on either disk. Here is the expected layout:

        HDD1 (1TB): [ 200GB C:\ SYSTEM  | XXXXXX | 500GB D:\ DATA - RAID-1 ]
        HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXX | 500GB D:\ DATA - RAID-1 ]

    Leaving data loss aside for a moment: is that final layout even possible using Intel RST? I previously tried this layout using dynamic disks and software RAID in Windows, and it worked as expected, but resynching after an OS failure was ugly enough that I don't want it. If it is possible, is there a way to keep the data on the existing partitions untouched - or at least keep the SYSTEM partition safe (I'm okay with the PROGRAM partition being lost)? And are there any strict or special steps I should follow in the Intel RST manager to achieve this? If the answer to those questions is "no", could you please suggest some other possible layouts that leave the C: SYSTEM partition untouched?

    Read the article
