Search Results

Search found 7086 results on 284 pages for 'explain'.


  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes" - but I don't know what it's called. It's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically already have the infrastructure in place to support these services at several of them. To provide access to these services, I would create an external DNS entry (e.g. VPN.MyCompany.com pointing to the gateway IP of one of my DCs), and clients connect via that DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry at some sort of third party who offers highly available "redirection" services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our own servers, but that would kinda defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.
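    What such providers sell is essentially managed DNS failover: the record carries a short TTL so a repoint takes effect quickly. A sketch of what clients would see (record name, TTL and the documentation IPs are illustrative):

        $ dig +noall +answer vpn.mycompany.com
        vpn.mycompany.com.  60  IN  A  203.0.113.10   ; primary DC gateway
        $ # after failing over via the provider's console, clients re-resolve within ~TTL:
        $ dig +noall +answer vpn.mycompany.com
        vpn.mycompany.com.  60  IN  A  198.51.100.10  ; secondary DC gateway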

  • SSH hangs when executing command remotely

    - by Serty Oan
    Client: OpenSSH_5.1p1 Debian-5ubuntu1 (Ubuntu 9.04). Server: OpenSSH_5.1p1 Debian-5 (Proxmox 2.6.24-7-pve). I use SSH to execute commands remotely on the server (module check_by_ssh of Nagios), but SSH hangs from time to time when trying to execute commands. I can log in to the server via SSH, but not execute even a simple 'ls', and it seems to block all clients from the same IP address. Authentication is not the problem, whether made by SSH keys or password.

        ssh -l root -p 2222 server.domain.tld 'ls'

    Here is the client debug info:

        debug1: Entering interactive session.
        debug2: callback start
        debug2: client_session2_setup: id 0
        debug1: Sending environment.
        debug3: Ignored env ORBIT_SOCKETDIR
        [...approx 40 ignored env vars skipped...]
        debug1: Sending command: ls
        debug2: channel 0: request exec confirm 1

    It hangs there. Then after a random time, it works again (without my doing anything); killing all sshd processes on the server seems to work too. It works from PuTTY. I saw that some people had trouble like this due to ISP reverse-DNS problems, but that does not seem to be the case here. It can work for hours and then not work for half an hour or so. What could explain this behaviour?
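    A way to watch the server side while the hang happens is to run a one-off sshd in foreground debug mode on a spare port (a diagnostic sketch; port 2223 is arbitrary):

        # on the server: a single-connection sshd that logs everything to the terminal
        /usr/sbin/sshd -d -p 2223
        # on the client: reproduce with full verbosity against the debug instance
        ssh -vvv -l root -p 2223 server.domain.tld 'ls'

    If the debug instance never hangs while the normal one does, suspect connection limiting (MaxStartups) or a stuck sshd child holding sessions for that source IP; running ps ax | grep sshd on the server during a hang shows which children are piling up.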

  • Understanding where an amazon ec2 instance runs?

    - by kenzo450D
    I am currently using the AWS API from my local desktop. I can successfully take backups of my Amazon volumes, and even create an AMI from them. Now, when I want to run an instance built from this AMI, where does the instance run - in their Elastic Compute Cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a new region (locations as defined in ec2-describe-regions) - how would I do that? It seems I have a poor understanding of the relation between Amazon volumes and instances; please explain it. I am only allowed to use the CLI tools to do all of my work. I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli - but I got this error:

        Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64)

    My local computer is 32-bit, but I do not want to run the instance on my local computer - only on Amazon's servers.
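    For what it's worth, instances only ever run in Amazon's datacenters; the CLI just sends API calls, so the local machine's architecture is irrelevant. The error above says the AMI was registered against a 64-bit kernel image (the x86_64 aki) while the default instance type is 32-bit, so a 64-bit instance type has to be requested explicitly (a sketch with the legacy API tools; AMI ID, key name and region are placeholders):

        # m1.large is a 64-bit instance type, matching the x86_64 kernel image
        ec2-run-instances ami-xxxxxxxx -t m1.large -k my-keypair --region eu-west-1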

  • Problem with connecting two different networks

    - by tanascius
    I have two networks: 192.168.13.0/24 (blue) and 192.168.15.0/24 (green). Computer A is connected to the 13-net only. Computer B has two interfaces, one in each network. There is a third computer that acts as a router and connects the 13-net to the 15-net (only in this direction). Now, I'd like to ping 192.168.15.100 on computer B from computer A. Unfortunately there is never a reply - but when I use a hub instead of a switch, it works. In my opinion the ping packet travels through the switch to the router (which is the default route/gateway for A), and the router sends the packet back through the switch to B. Probably B receives it on its 15-net interface but answers out of its 13-net interface? Is this possible? The problem is that B may only have a gateway of 192.168.13.50 - but I am not really sure of it (B is an embedded system with limited configuration possibilities). Can anyone explain what happens here? Thank you!
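    A capture on both of B's interfaces would show where the reply goes astray (a diagnostic sketch; interface names are assumptions):

        # on B: does the echo request arrive on the 15-net interface?
        tcpdump -ni eth1 icmp
        # ...and does the reply leave here, or via the 13-net interface instead?
        tcpdump -ni eth0 icmp

    If the request arrives but no reply leaves either interface, B's routing/gateway configuration is the suspect; if a reply does leave via the 13-net interface, a matching capture on A shows whether it arrives and why it is being ignored.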

  • How to change the MAC address in Win 8 to spoof a Roku Player through a WiFi splash page?

    - by luser droog
    My Linux laptop died yesterday and now I can't watch TV. Let me explain. I use a Roku Player to stream Netflix shows to my television, and a year or two ago the Internet service provided in my apartment complex added a splash page to get through the router and onto the net. After not too many days, I remembered that internet devices identify themselves with a MAC address. So I delved into the manpage of ifconfig and discovered that I could persuade my laptop to pretend to be the Roku Player, connect, click through the splash page, disconnect, and change it back. This would let the Roku connect for about 24 hours, after which I would have to do it again. But the laptop died yesterday during my smoke break, so during lunch I ran to OfficeMax and got a new one. But I don't know where to begin looking for where to change the MAC address (assuming it's possible). I know I can try dual-boot, or a keychain OS, or possibly other things to resurrect my old method. But is it possible to get Windows to do it?
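    For reference, the old Linux routine was presumably something like this (interface name and MACs are placeholders), which a live-USB boot could reproduce on the new machine:

        ifconfig wlan0 down
        ifconfig wlan0 hw ether 00:0d:xx:xx:xx:xx   # the Roku's MAC, from its settings screen
        ifconfig wlan0 up
        # connect, click through the splash page, then revert:
        ifconfig wlan0 down
        ifconfig wlan0 hw ether xx:xx:xx:xx:xx:xx   # the laptop's original MAC
        ifconfig wlan0 up

    On Windows, the equivalent knob - where the driver allows it - is the adapter's "Network Address" property under Device Manager > adapter > Properties > Advanced.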

  • linux: per-process monitor, every 10 minutes, with history access

    - by Inbar Rose
    I really didn't know a better way to ask my question, hence you get a horribly named question. I will explain what I want to do; maybe that will help you help me. I would like to have my Linux machine continuously monitor (every 10 minutes) all the processes on my machine. The information I require from each process is the name, CPU usage, allocated (virtual) memory, and resident (RAM) memory. If these periodic reports were to be looked at, they would look something like this:

        PROCESS    CPU    RAM    VIRTUAL
        name1      %      MB     MB
        name2      %      MB     MB
        ...etc...

    These reports should be stored in such a way that I can access them at a later date by giving a date/time scope (range). For instance, if I want to see the history of my processes from 12:00:00 1.12.12 till 12:00:00 2.12.12, I can - and it should give me the history of the processes for every 10 minutes between those date/time borders. The format of the return is not important; that will be handled by a script anyway and can be modified into anything I need. I have looked into a few things so far but have not found something that clearly meets my needs, among them: sar, free(1), top(1), and a few other things. It should be a simple issue; I can already see all this information by simply looking at my htop, but I need a tool that will gather the desired fields for each process every 10 minutes, and then also let me extract slices of that data based on date/time scopes (ranges). Note: I have limited experience with Linux, so please give detailed information. Note 2: the desired output will be something like this (after receiving the desired range):

        CPU USAGE BY PROCESS:
        proc_nameA  1,2,2,2,2,2......   (numbers represent % usage every 10 minutes)
        proc_nameB  4,3,3,6,1,2......

    The same idea with the other information.
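    The collection side can be as small as a cron job that snapshots ps into a timestamped log; slicing by date/time range is then just filtering lines by their timestamp prefix (a minimal sketch; log path and field list are assumptions - note rss/vsz are reported in KiB):

        #!/bin/sh
        # /usr/local/bin/procmon.sh - append a timestamped snapshot of all processes
        ts=$(date '+%F %T')
        ps -eo comm,%cpu,rss,vsz --no-headers | awk -v ts="$ts" '{print ts, $0}' \
            >> /var/log/procmon.log

        # crontab entry: take a snapshot every 10 minutes
        */10 * * * * /usr/local/bin/procmon.sh

    Extracting a range is then e.g. awk '$1" "$2 >= "2012-12-01 12:00:00" && $1" "$2 <= "2012-12-02 12:00:00"' /var/log/procmon.log.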

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large reorganization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like:

        /var/www/sites/vhost1, vhost2, vhost3
        .../wordpress/blogX
        .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

        /var/www/data/vhost1, vhost2, vhost3
        .../wordpress/blogX/uploads
        .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs are straight from SVN, and updates are done by switching branches or "svn up". So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page, Excel document, whatever, and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded to include more technical details.
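    One lightweight option is to keep a Graphviz description in SVN next to the configs and render it on demand - text-diffable for co-workers, pictures for everyone else (a minimal sketch; node names just echo the layout above):

        cat > architecture.dot <<'EOF'
        digraph www {
          apache [label="Apache\n/etc/httpd/conf.d/vhosts.d"];
          apache -> { vhost1 vhost2 vhost3 };
          vhost1 -> blogX [label="wordpress"];
          vhost2 -> wikiX [label="mediawiki"];
          blogX -> uploads [label="symlink to /var/www/data"];
          testing -> apache [label="rsync when tested"];
        }
        EOF
        dot -Tpng architecture.dot -o architecture.png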

  • DNS redirecting to Apache

    - by leo
    I have CentOS installed on LVM on a Debian host; BIND and Apache run on the CentOS side. I need to access sites from a browser on Debian with names like 1.domain, 2.domain, etc. I set up Apache and I can access these sites, but only via /etc/hosts on Debian; now I'm trying to configure BIND. named.conf:

        zone "domain" IN {
            type master;
            file "/var/named/domain.zone";
            allow-update { none; };
        };

    192.168.100.1 is the DNS server's IP; 192.168.100.139 is the Apache IP. domain.zone:

        $TTL 86400
        @      IN  SOA  domain. root.domain. ( 100 1H 1M 1W 1D )
        @      IN  NS   ns1.domain.
        @      IN  A    192.168.100.139
        ns1    IN  A    192.168.100.1
        WWW    IN  A    192.168.100.139
        1      IN  A    192.168.100.139
        2      IN  A    192.168.100.139
        www.1  IN  A    192.168.100.139
        www.2  IN  A    192.168.100.139

    Also, is it necessary to configure 100.168.192.in-addr.arpa? Please explain where I'm wrong.
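    To take /etc/hosts out of the picture, query BIND directly from the Debian side and only then switch the resolver over (a sketch; assumes BIND listens on 192.168.100.1):

        # bypass /etc/hosts and the stub resolver entirely:
        dig @192.168.100.1 1.domain A +short
        # if that returns 192.168.100.139, point the client at BIND:
        echo "nameserver 192.168.100.1" > /etc/resolv.conf

    Two things worth remembering: /etc/hosts normally wins over DNS (per the hosts line in /etc/nsswitch.conf), so stale entries there will mask the zone; and the 100.168.192.in-addr.arpa zone is only needed for reverse lookups - browsing works without it.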

  • Same script, different behavior [migrated]

    - by Antoine_935
    I just stumbled upon an interesting bug and am still trying to figure out what exactly is happening; maybe you can help. First, the context: I'm currently building yet another man-to-html converter (for reasons I won't go into here, but I need it). So, have a look at the screenshot below (see the link), more precisely at the outlined spots. See? In the upper shell I have "&lt;" and "&gt;", that is, escaped HTML, while in the shell below I have "<" and ">" directly. But as you can see (or do I seriously need a looking glass?), the command man 2 semget | webmanner is the same on both sides, as is the output of which webmanner. The two are executed roughly at the same moment, with no modification made to the script in between. [Oops, cannot post pictures just yet... Here comes the link] http://aspyct.org/media/webmanner-bug.png But the shell below is older (opened about 1 hour ago); newer shells all print out "&lt;". So my first guess was that it somehow had a cached reference to the old inode of the file, or old blocks or whatever. So I modified parts of the script, at the start and then at the end, to print different messages. And, surprise, the messages showed up in both terminals - but still the same difference between "&lt;" and "<". I'm confused... How to explain that behavior? I'm working on OSX 10.8 (Mountain Lion). EDIT: OK, there is one big difference: the shell below uses Ruby 1.9.3, while the one above is on 1.8.7. Is there any known difference in string handling between the two versions?
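    A quick check to run in each terminal, to confirm the shells are resolving different interpreters via PATH (a sketch; paths will differ per setup):

        type -a ruby
        ruby -e 'puts RUBY_VERSION; puts "".respond_to?(:encoding)'

    Ruby 1.9 introduced per-string encodings (respond_to?(:encoding) prints true on 1.9, false on 1.8), which is exactly the kind of change that can alter how a text filter escapes or passes through byte sequences - so two shells picking up different rubies would explain identical commands producing different output.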

  • Are two periods allowed in the local-part of an email address?

    - by Mike B
    A third-party email gateway relay is refusing to process a message for an email address we're sending to. The address is in the format of [email protected] (note the two periods). Is this allowed by RFC guidelines? RFC 2822 seems to object to this in section 3.4.1:

        The locally interpreted string is either a quoted-string or a
        dot-atom.  If the string can be represented as a dot-atom (that is,
        it contains no characters other than atext characters or "."
        surrounded by atext characters), then the dot-atom form SHOULD be
        used and the quoted-string form SHOULD NOT be used.  Comments and
        folding white space SHOULD NOT be used around the "@" in the addr-spec.

    Furthermore, in that same section, it gives this grammar:

        addr-spec   = local-part "@" domain
        local-part  = dot-atom / quoted-string / obs-local-part

    I interpret this to mean that the local-part can have content separated by dots, but that there cannot be two successive dots, and it cannot start or end with a dot. That being said, I'm not familiar with dot-atom syntax, so maybe I'm mistaken here. Can someone please confirm and explain?
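    The deciding rule is the dot-atom-text production (RFC 2822, section 3.2.4; quoted from memory, so worth checking against the RFC text):

        dot-atom-text = 1*atext *("." 1*atext)

    Every dot must be followed by at least one atext character, so consecutive dots cannot appear in a dot-atom; a local part containing ".." is only representable in quoted-string form (e.g. "first..last"@example.com), and many relays refuse the unquoted form outright.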

  • Using multiple USB webcams in Linux

    - by rachelderp
    Running more than one USB webcam in Debian/Linux results in the following error:

        libv4l2: error turning on stream: No space left on device
        VIDIOC_STREAMON: No space left on device

    What initially seemed to be a programming issue in OpenCV turned into a quest for a mysterious hardware/software problem after the same errors were produced by running cheese and xawtv. Apparently it's caused by webcams requesting all the available bandwidth on the USB host controller. With that in mind, I decided to run wireshark and capinfos to see just how much bandwidth a single camera used:

        4 megabits per second at 320x240
        14 megabits per second at 640x480
        32 megabits per second at 1920x1080

    Interesting! That might explain why two cameras at 320x240 work but any higher resolution fails. It's as if my USB controller is only operating at USB 1 speeds, yet lsusb shows both webcams belonging to a device which supposedly supports 480 megabits per second. One proposed solution was forcing the webcams to calculate their bandwidth usage instead of requesting their maximum, by running the following commands:

        sudo rmmod uvcvideo
        sudo modprobe uvcvideo quirks=128

    Unfortunately that made no difference, so I decided to try another solution. A post on Stack Overflow suggested telling my webcams to use a lower FPS or a compressed video format like MJPEG, but after running v4lctl list it doesn't appear that either of my webcams supports changing their video mode. And that's where I'm stuck. Why would two webcams operating well below the maximum speed of USB 2 produce this error? PS: It's not a disk space issue; df displays no change when the webcams are started. PPS: If it makes a difference, here's the output of lsusb.
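    Note that what wireshark measures is the average payload actually sent, while the driver reserves the format's worst-case isochronous bandwidth up front - that reservation is what the controller runs out of. Inspecting and pinning down what the cameras negotiate may help (a sketch; requires v4l-utils, and the device node is an assumption):

        # list every pixel format / resolution / frame interval the camera advertises
        v4l2-ctl -d /dev/video0 --list-formats-ext
        # request a cheaper mode before streaming: MJPEG at 640x480, 15 fps
        v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=MJPG
        v4l2-ctl -d /dev/video0 --set-parm=15

    If --list-formats-ext shows only uncompressed YUYV, that's the ceiling: raw 640x480 at 30 fps costs roughly 640*480*2 bytes * 30 ≈ 147 Mbit/s per camera in reserved bandwidth, and two such reservations plus isochronous overhead can exceed what a single 480 Mbit/s controller will commit.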

  • Revamping an old and unstable IT-solution for a customer?

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with flash games. To say the least, this isn't working for them. Now - I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the domain controller for the future AD also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN. I don't know how to let users logging in via the VPN access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but every computer won't necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules in their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.

  • Gentoo server time issue, can't manually set time, NTPD won't correct, too big of a time difference

    - by kevingreen
    So, my server is living in the future; unfortunately I can't get lottery numbers or stock picks out of it. It thinks this is the time:

        Thu Nov  7 04:07:18 EST 2013

    Not correct. I tried to set the time manually via date in a few ways:

        # date -s "06 NOV 2013 14:48:00"
        # date 110614482013        # same output, same problem

    Both output Wed Nov 6 14:48:00 EST 2013, but when I check the date again, it's still set to Nov 7th 0400 or whatever. I checked my system messages, and I see this pop up often:

        Nov  7 03:54:00 www ntpd[4482]: time correction of -47927 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.

    Which makes sense; we're way off the correct time. But I can't seem to manually fix it. So now what? Also, I'm wondering if my hardware clock is set up correctly; hwclock doesn't return any values. Would that be causing issues? This is a virtualized server. I don't have direct access to the hypervisor, but I can talk to whoever does - assume I can explain myself well enough. Thanks
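    By default ntpd refuses to step the clock past its panic threshold, so the usual sequence is to step it once by hand and then let ntpd hold it (a sketch; service and pool names vary by setup):

        /etc/init.d/ntpd stop        # stop ntpd so it doesn't fight the manual step
        ntpdate -b pool.ntp.org      # -b: step the clock instead of slewing
        hwclock --systohc            # write the fixed time to the RTC, if one is exposed
        /etc/init.d/ntpd start

    Starting ntpd with the -g flag also permits one large initial correction. That said, if date -s itself doesn't stick, something outside the guest is rewriting the clock: on a VM with no visible RTC (which would explain the silent hwclock), the hypervisor's own time sync is the usual culprit, and that is a conversation to have with whoever runs it.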

  • Lost user account for Windows Vista

    - by annelie
    Hello, I'm trying to help a friend who's lost her user account in Vista. I know there's supposed to be a way you can boot the computer from the Vista installation disc and create an admin account you can later log in with, but her installation disc is in Australia and her laptop in London. Is there any other way to get in? Or would it be better to try and access just the hard drive? She's mainly concerned with getting all her data off it. As for how she lost the account, I'll let her explain in her words. :) My computer basically got some virus and now is up sh*t creek. it told me i had this cryptic thingy majiggy was missing and then this fake virus told me i needed to scan my computer. SO i tried to do malware thing but it kept shutting my computer down. ANYWAY...now its it will only open up with 'launch startup repair' and has got rid of my settings for logging in and wants me to be 'other user' which i have no password or username for'...so basically im stuffed. This is Windows Vista by the way. Thanks, Annelie

  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms, or refer me to a good guide on how to achieve the below. I have a system running on EC2 and Amazon RDS, getting data in and saving it to the DB. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run, and it needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set up to execute the script on startup and shut down when done. Is that the right way to do it? If so, how? How do I tell the spot instance to run every day - through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup - through cloud-init? Any help would be appreciated. One last thing: the job is not very time-sensitive, as long as it runs every day. Thanks
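    One workable pattern (a sketch; price, AMI, instance type and file names are all placeholders): a cron job on the existing always-on server files a one-time spot request each evening, and the instance's user data runs the report and shuts the machine down, which ends billing when the shutdown behavior is set to terminate:

        # /etc/cron.d/daily-report on the always-on server: request the worker at 23:00
        0 23 * * * root aws ec2 request-spot-instances --spot-price "0.35" \
            --type one-time --launch-specification file:///etc/report-worker.json

    report-worker.json names the AMI and a high-memory instance type and carries the user data: a base64-encoded script that runs the report and ends with shutdown -h now. cloud-init on stock Amazon Linux and Ubuntu AMIs executes such user-data scripts on first boot, so no extra wiring is needed.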

  • Different behaviour when running an exe from the command prompt than from Windows

    - by valoukh
    Forgive me if I'm not using the correct terminology here, but I'll try to explain the issue I'm having as best I can! We use a program that allows you to view video footage from several cameras; when it is opened, it automatically loads several video files within its interface. These video files are stored in a subfolder, e.g. "videos\video1.asf". I didn't create the program, so I can't say what method it uses to open the files. The files are stored on a network server and are accessed via a share/UNC path. When the program is run from Windows Explorer (by navigating to the network share and double-clicking the exe), it works perfectly. When it is run via the (elevated) command prompt (e.g. by typing \\server\path\to\file.exe), it opens, but the corresponding video files do not load. I am trying to create a script that launches the program via the command prompt, so the first step is finding out why the two actions above have different consequences. Any advice on how running executables from the command prompt could produce a different result would be greatly appreciated. Thanks
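    One difference worth ruling out is the working directory: relative paths like videos\video1.asf resolve against it, and the two launch methods set it differently (Explorer starts the process in the exe's own folder, while an elevated prompt typically sits in C:\Windows\system32). A quick test from cmd (a sketch; the path is a placeholder - pushd maps the UNC path to a temporary drive letter and changes into it):

        pushd \\server\path\to
        file.exe
        popd

    If the videos load this way, the eventual launch script just needs to set the working directory before starting the program.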

  • Trying to diagnose network problem: ping 127.0.0.1 (or any address) results in error code 1

    - by Mnebuerquo
    The NIC seems to be working, as Windows detects the hardware, has a driver, and reports success. DHCP seems to have gotten an IP address, 192.168.1.101; I released and renewed it and it seemed to work normally. I tried ping 127.0.0.1 as the first step of testing the network configuration:

        Pinging 127.0.0.1 with 32 bytes of data:
        PING: transmit failed, error code 1.

    I read somewhere that net helpmsg [error code] would give a human-readable name for the error code; net helpmsg 1 says "Incorrect function". I've tried disabling the firewall and antivirus in McAfee SecurityCenter and I still get the same error. Could the firewall/antivirus be breaking it even if disabled? Broadcom Advanced Control Suite 2 is installed, and its network test passes all tests, including ping 192.168.1.1, which is the default gateway. If I try ping 192.168.1.1 from the command prompt I get error code 1 again. So does anyone have any theories that would explain this problem? Other tests I should try? Thanks!
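    To answer the parenthetical: yes - security suites keep kernel-mode filter drivers bound to the stack even when "disabled", so McAfee stays a suspect (which could also explain why the Broadcom tool, driving the NIC its own way, still passes). The other common cause of this symptom is a corrupted Winsock/TCPIP registration; a low-risk test from an elevated prompt (standard Windows commands; reboot afterwards):

        netsh winsock reset
        netsh int ip reset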

  • Am I obliged to use ipv6 tunnel services if I want to be able to use it?

    - by Zagorax
    I was looking into configuring Slackware to use IPv6, but all the instructions I found speak about using an IPv6 tunnel that encapsulates IPv6 packets inside IPv4 and sends them to an external router, which extracts the IPv6 traffic and sends replies (or, at least, this is what I understood). Is that necessary? Isn't there a way to configure a pure IPv6 system? If yes, could you please point me to a guide that clearly explains how to enable IPv6 without this trick? I would like to configure my Slackware desktop first, and then do the same with my CentOS server. EDIT: maybe I gave you too little information - sorry. Here is some more, thanks to the posted guide:

        ~$ test -f /proc/net/if_inet6 && echo "Running kernel is IPv6 ready"
        Running kernel is IPv6 ready

    So IPv6 is enabled in my kernel. Some more output from ifconfig and route, plus the /etc/resolv.conf content (with OpenDNS):

        ~$ /sbin/ifconfig wlan0 | grep inet6
        inet6 addr: fe80::21f:3bff:fe60:cc5b/64 Scope:Link
        ~$ /sbin/route -A inet6 | grep wlan0
        fe80::/64  ::  U  256 0 0 wlan0
        ff00::/8   ::  U  256 0 0 wlan0
        ~$ cat /etc/resolv.conf
        inet6 nameserver 2620:0:ccc::2
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    But still, with ping6 I can only ping localhost (::1); everything else is unreachable. Normal ping works fine. That is why I was asking whether I am obliged to use a tunnel.
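    The outputs above show only a link-local address (fe80::/64), which is valid solely on the local segment - that is why nothing beyond ::1 and the link answers. Usable IPv6 needs a global address and default route, obtained natively from an IPv6-capable ISP/router or, failing that, through a tunnel; the tunnel is a workaround for IPv4-only upstreams, not part of IPv6 itself. A quick check with iproute2:

        ip -6 addr show scope global     # any globally routable address?
        ip -6 route show default         # any v6 default route?

    If both come back empty, nothing upstream is offering IPv6 (no router advertisements, no DHCPv6), and a tunnel broker - or a more cooperative ISP - is the way to get connectivity.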

  • IIS 7: launch unique site instance per host name

    - by OlduwanSteve
    Is it possible to configure IIS 7 so that a single site with multiple bindings (or wildcard bindings) will launch a unique instance for each unique host name? To explain why this is desirable, we have an application that retrieves its configuration from a remote system. The behaviour of the application is governed by this configuration and not by the 'web.config'. The application uses its host name as a key to retrieve the configuration. Currently it is a manual process to create an identical IIS site for each instance of the application, differing only by the bindings. My thought, if it were possible, is that it would be nice to have one IIS site that effectively works as a template for an arbitrary number of dynamic sites. Whenever it is accessed by a unique host name a new instance of the site would be launched, and all further requests to that host name would go to that instance just as though I had created the site by hand. I use IIS regularly, but only for fairly straightforward site hosting. I'd like to know if this could be configured with vanilla IIS 7, but would also welcome answers that require a plugin or 3rd party product. Programming/architectural suggestions about changes to the app wouldn't really be appropriate for serverfault.

  • How to boost playback volume in real time on media recorded with a very low volume.

    - by L Marksman
    I have never heard a satisfactory answer to this often misunderstood question, so let me explain. Let's say I have a sound card and earphones/speakers that can play back audio loudly enough in most cases. This is great, but the problem is that you always find people who do not know how to record audio, from YouTube videos to music. So you end up with audio playback that only uses 10% or less of the capacity of your sound hardware; in Vista/Win 7 you see this frequently in the mixer, with the volume pushed up to max but the green sound level only going up a millimeter or two. I am looking for (preferably free) software, or a method, to boost the sound level of any audio from any source in real time, using more of my hardware's capacity - similar to what VLC media player can do. Oh, and please do not tell me it is impossible: I am not trying to boost the volume past what my hardware is capable of, I am just trying to use my hardware's full capacity. Also, please do not tell me to buy new hardware; I know I can use hardware amplification, but I don't want to spend money (like many others) on a simple little problem like this. Thanks!

  • How to configure SVN server for my own project

    - by user1729952
    I work with a team on an Android project using the Eclipse IDE. We need version control, and we need to access the repo remotely. I have no experience using or installing servers, and only a little experience using SVN on Windows - I still have problems connecting to it remotely. I need to use no-ip.com because my IP changes; however, I failed to make VisualSVN Server work with no-ip. What options do I have? The best thing would be to get it working on Windows. If not, I have another computer running Ubuntu 12.04.1, on which I have installed Apache2 and SVN to try to get it working. SVN is installed, and I went through tutorials to configure the access protocols, but I can't figure out how to access it remotely from another computer. Can someone tell me the steps I need to get this job done? I can then do my own research for each step. (Please explain each step, as I may not be familiar with some keywords or phrases.) EDIT: Also worth noting: my company has a website hosted on a remote server, running Linux - can we use it as a repo, and how?
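    A minimal path on the Ubuntu box is svnserve, SVN's own lightweight server - it needs just one TCP port (3690 by default) forwarded through the router to reach the no-ip hostname (a sketch; repository path and names are placeholders):

        svnadmin create /srv/svn/androidproj      # create the repository
        svnserve -d -r /srv/svn                   # daemon serving everything under /srv/svn
        # set users/passwords in /srv/svn/androidproj/conf/svnserve.conf and passwd,
        # then from any teammate's machine:
        svn checkout svn://yourcompany.no-ip.org/androidproj

    The hosted company website only works as a repo if you can install software on it (or at least run svnserve over SSH); plain web hosting without shell access won't do.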

  • How do you get linux to honor setuid directories?

    - by Takigama
    Some time ago, in a conversation on IRC, one user in a channel I was in suggested setting the setuid bit on a directory in order for files to inherit its user ID, to solve a problem someone else was having. At the time I spoke up and said "Linux doesn't support setuid directories". After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory. Just to explain: when I say "Linux doesn't support setuid directories", what I mean is that you can run "chmod u+s directory" and it will set the bit on the directory, but Linux (as I understood it) ignores this bit on directories. Try as I might, I just can't quite replicate that pastebin. Someone suggested to me once that it might be possible to emulate the behaviour with SELinux - and playing around with rules, it's possible to force a UID on a file, but not via a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative - most places claim "no, setuid on directories does not work on Linux", with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html). I don't remember who the original person was, but the original system was a Debian 6 system, and the filesystem it was running was XFS mounted with "default,acl". I've tried replicating that, but no luck so far (I've tried various versions of Debian, Ubuntu, Fedora and CentOS). Can anyone clue me in on what or how you get a system to honor setuid on a directory?
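    For comparison, the bit that does propagate ownership on stock Linux is setgid on a directory, which makes new files inherit the directory's group (a quick demo; the group name is an assumption):

        mkdir /tmp/shared
        chgrp developers /tmp/shared    # assumes a 'developers' group exists
        chmod g+s /tmp/shared           # setgid: new files inherit the group
        touch /tmp/shared/f
        ls -l /tmp/shared/f             # group shows 'developers', not the creator's primary group

    Since vanilla kernels ignore u+s on directories, a pastebin showing UID inheritance most likely involves something layered on top - a FUSE filesystem such as bindfs, NFS server-side ID mapping, or similar - rather than plain XFS semantics.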

  • MacMini transmit rate stuck at 11, every other device can connect at full 54Mbit/s?

    - by chum of chance
    I have a MacMini, circa 2007, that is getting very low transmit rates via WiFi (8-11), while other devices, including a MacBook Air, get the full 54. With everything else off, the MacMini doesn't seem to want to go any faster. Since it was connected to ethernet its entire life until now, I wonder if there are settings I can change to speed up the connection. Option-clicking the network icon gives this readout:

        PHY Mode: 802.11g
        Channel: 1 (2.4 GHz)
        Security: WPA2 Personal
        RSSI: -73
        Transmit Rate: 11

    My new MacBook Air has the following readout:

        PHY Mode: 802.11n
        Channel: 1 (2.4 GHz)
        Security: WPA2 Personal
        RSSI: -66
        Transmit Rate: 79

    Both have full bars, and the wireless router is in the same room, to eliminate any obstructions from the equation. Could the MacMini be connecting with an older protocol, like 802.11b, and erroneously reporting that it is connected at 802.11g? This would explain why I haven't seen a transmit rate above 11. Any further troubleshooting I can try before buying a new USB 802.11n device? The WiFi router is a D-Link DIR-615. I can see other devices, and none, even the other g-connected devices, are getting below 30-40 Mbit/s. What's going on here?
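    The 802.11b theory is easy to test: 11 Mbit/s is exactly the b ceiling, and the bundled airport utility reports the negotiated rates directly (standard but unlinked path on OS X of that era):

        /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I
        # a lastTxRate / maxRate of 11 would confirm a b-rate association

    A maxRate of 54 with a low lastTxRate would instead point at signal quality (the -73 RSSI is noticeably worse than the Air's -66), in which case moving the Mini or the router is the cheaper experiment.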

  • (squid): failed to find or read error text file.

    - by adam
    There is something in our ERR_NO_RELAY file that is causing this error to be logged and the squid process to fail on startup. I can't show you the exact content of the file, but I can tell you it has several lines of JavaScript. When we remove the JavaScript, the problem goes away. This same file does not cause any issues on the other 3 instances of squid that we have running internally. All instances of squid came from the same VM image, so they should be the same. We are unable to reproduce this issue except on the one box, and we are unable to debug more on that box right now because it is running in production. I know these files are interpreted so that squid can substitute certain values available from the session, so a syntax error may have caused this issue - but that does not explain why we cannot reproduce it on other (virtually identical) images. One difference is that the instance of squid that has the issue was under load when the issue occurred. Any suggestions/insight would be appreciated. Thanks!
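    Building on the interpretation point: squid runs error-page files through a %-token substitution pass (%U for the URL, %e for the errno, and so on), and JavaScript is a rich source of stray % characters that can form tokens squid doesn't expect. If that is the trigger, doubling each literal percent should make the file safe (%% is squid's escape for a literal %; worth verifying against the errorpage documentation for your version):

        // inside ERR_NO_RELAY, instead of:
        el.style.width = "100%";
        // write:
        el.style.width = "100%%";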

  • Is Google tracking our web history even when we do not visit Google or its affiliate websites?

    - by Anoyon-12
    I have recently updated to Google Chrome 29.0.1547.76, and when I click the new-tab button there is a new homepage now. OK. My current settings forbid third-party cookies from being set, and I clear all of my browsing data every time I close the browser, or whenever it's been a long browsing session. There is a help dialog that appears the first time you open the new-tab page. What I did was: I cleared all my browsing data (from the beginning of time) and then went to the new tab again, and the help dialog appeared. I did the same a third time, and the same thing happened. So for the fourth time I cleared all my browsing data, clicked open a new tab, and then navigated to chrome://settings/cookies - and there were Google cookies set again. So does this mean Google is tracking our web history just for using Chrome? I know it's not illegal, because these cookies only appear when you click the new tab in Google Chrome 29.0.1547.76 - maybe that was the reason Google redesigned the entire new-tab page. Through this, Google is forcing us to allow them to track us. I don't want to set another page as my new-tab page; I just want the old one. Google has a long history of invading privacy without users' consent - there was that Safari incident, I am sure you people remember. So can anyone tell me about this issue? I may be wrong, so please explain.
