Search Results

Search found 19471 results on 779 pages for 'network troubleshooting'.

Page 533/779 | < Previous Page | 529 530 531 532 533 534 535 536 537 538 539 540  | Next Page >

  • qemu command not running directly

    - by Dr. Death
    Can I use "qemu://localhost/system " command directly inplace of "virsh -c qemu://localhost/system " command if my both machines are physically connected in a network as virsh will simply creating the virtual shell between two systems? can I use ssh in place of virtual shell here? I tried this but system gives no directory or file name for qemu even when i had qemu installed properly in my system. but when i use virsh i did not get such errors. Do i need to open any unix socket for doing this?

    Read the article

  • Help changing MAC address in Windows 7 [closed]

    - by Niphoet
    Possible Duplicate: Change MAC Address I need to change the MAC address of my wireless adapter in Windows 7 (Ultimate RTM). I used to do this in XP both directly in the registry editor and with a .REG file I wrote. I have used each of these methods in Windows 7, as well as a few tools I found that are supposed to do this. Every time I change it, I disable and re-enable the network adapter in control panel, but upon running ipconfig /all it still shows my old MAC address. Any help? By the way, I do have Administrative Rights and UAC turned off.
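
    For context, a sketch of the registry value Windows consults for a MAC override (the 0007 instance subkey and the address itself are placeholders, since the actual subkey differs per adapter). Note that many Windows 7 wireless drivers only accept a locally administered address, i.e. one whose second hex digit is 2, 6, A or E:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007]
        "NetworkAddress"="02005E123456"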

    Read the article

  • Ubuntu boots into command line instead of X.

    - by Ethan Turkeltaub
    I posted this on the Ubuntu forums and they had no good answer. I hope you guys have a solution! On my relatively new install, it's booting into the command line instead of X -- again. This is the reason I reinstalled in the first place, and it has happened to me three times now. So, I boot up and it gets past GRUB, past the glowing Ubuntu option, then it prompts me for my username, then my password. I run startx, and that starts the GUI for about a minute; then it brings up the GUI login screen. To add to the mess, the network applet is not shown in the panel. Additionally, Chrome will not launch (I ran Firefox from the terminal instead). What's the problem here?
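
    As a generic starting point (assuming the stock GDM display manager that Ubuntu of this era uses), a few checks that narrow down whether the display manager or the X server itself is failing:

        sudo service gdm status          # is the display manager job running?
        sudo service gdm start           # try to bring the login screen up by hand
        grep EE /var/log/Xorg.0.log      # look for X server errors
        cat ~/.xsession-errors           # per-session errors after startx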

    Read the article

  • Why does every change on the Asus UX50v cause the screen resolution to be lost?

    - by Kaveh Shahbazian
    Why does every change on the Asus UX50v cause the screen resolution to be lost? Installing a new application, connecting to another wireless network, changing some settings, and so on all cause this problem. For example, after installing an application the UX50v needs to restart, and when it restarts the resolution is set to 640x480 (or 600x800) and the Hibernate and Sleep options disappear from the shutdown menu! (I have other problems with this Asus UX50v too - like I can't update Windows 7 because the update crashes on it - but this one is absolutely ridiculous and stupid!)

    Read the article

  • How to connect to a local MySQL server (LAN)

    - by clarkk
    I've got two Debian 6 servers - one for the web and one for the database. How can I connect through the local area network? On both servers I have permanently changed the hostnames in /etc/hostname and /etc/hosts (web => web-server, db => db-server). In the MySQL privileges I have set the root user to accept requests from web-server (instead of localhost), and from the web-server I connect to db-server. In my.cnf I have commented out the following line:
        # bind-address = 127.0.0.1
    The error I get is:
        Warning: mysqli::mysqli(): (HY000/2005): Unknown MySQL server host 'db-server' (1)
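
    A minimal sketch of the pieces this setup needs (the IP address, database name and password below are placeholders, not taken from the question): the web server has to resolve the name db-server, MySQL has to listen on the LAN interface, and the account needs a grant that matches the connecting host.

        # on web-server: /etc/hosts
        192.168.0.20    db-server

        # on db-server: my.cnf (listen on all interfaces, or on the LAN IP)
        bind-address = 0.0.0.0

        # on db-server: in the mysql client
        GRANT ALL PRIVILEGES ON mydb.* TO 'root'@'web-server' IDENTIFIED BY 'secret';
        FLUSH PRIVILEGES;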

    Read the article

  • Problem Assigning Static IP to CentOS Server

    - by nategood
    We have a sandbox server running CentOS that we run inside our office. Our ISP has assigned us a block of 5 static IPs, and we now want to assign one of them to this machine. The interface configuration is:
        DEVICE=eth0
        BOOTPROTO=none   # have also tried "static" here
        HWADDR=00:13:72:*:*:*
        ONBOOT=yes
        TYPE=Ethernet
        NETMASK=255.255.255.0
        IPADDR=173.*.*.161
        GATEWAY=10.1.10.1
    /etc/resolv.conf is also set with the appropriate name servers from our ISP. When I ifdown eth0 and then ifup eth0, I get:
        SIOCADDRT: Network is unreachable
    When I switch to DHCP, the machine has an IP assigned and there are no connection problems. Any ideas?
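
    For illustration, with a static configuration the GATEWAY has to sit inside the subnet defined by IPADDR and NETMASK, otherwise adding the default route fails with exactly this SIOCADDRT error. A sketch of an internally consistent ifcfg-eth0 using documentation addresses as placeholders (a /29 netmask matches a 5-address block, but your ISP's actual gateway and mask may differ):

        DEVICE=eth0
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=Ethernet
        IPADDR=203.0.113.162
        NETMASK=255.255.255.248
        GATEWAY=203.0.113.161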

    Read the article

  • Sophos Enterprise Console 4.5, Mac Client 7 Not Auto-Populating SEC Info

    - by user65712
    I have Sophos Endpoint Security and Control, which includes Sophos Enterprise Console (SEC). I'm currently running version 4.5 of SEC, which is an older version. I subscribe to Mac updates, and SEC generates a binary Mac installer for me to use on Mac endpoints (version 7 for Mac, also an older version). However, when I run the installer on Mac endpoints, it installs fine but never auto-fills the location of the update server (a network share) or the account credentials used to access it, which I do not know because they were generated automatically by Sophos. Previously, I was able to use the SEC-generated installer to install and run Sophos on a Mac seamlessly; the update location and account credentials were filled in automatically, so I ran the installer and it was perfectly set up. Now Sophos installs on a Mac but never updates, because it has neither the update location nor the credentials. Has anyone else run across this problem or know why it is happening? Sophos Enterprise Console 4.5.1.0

    Read the article

  • Setting up a vpn and IIS IP address restrictions

    - by carpat
    I'm trying to get a VPN set up with internal-access-only sites. I have set up a VPN on a Windows server (a single VPS), and I can connect from a remote computer and get an IP assigned correctly (from 192.168.1.1-255). Next I configured IIS (running on the same machine) IP Address and Domain Restrictions to allow only the IP address range 192.168.1.0 with subnet mask 255.255.255.0. When I connect to the VPN with "Use Default Gateway on Remote Network" (so that requests must go through the VPN), I get a 403 from the internal sites. What did I miss?

    Read the article

  • DTrace for Oracle Linux news: new beta release and conference appearances

    - by Lenz Grimmer
    A new set of RPM packages of our port of DTrace for Linux has just been published on the Unbreakable Linux Network. This is another beta release of our ongoing development effort to bring the DTrace framework to Linux. This release includes the following changes:
    - The packages are now based on the final public release of the Unbreakable Enterprise Kernel Release 2 (2.6.39). The previous beta drop was based on a development version of the 2.6.39 kernel; there is no new functionality specific to DTrace in this release. The primary goal was to get the code base in sync with the released kernel version.
    - Based on the feedback we received from some users on how their applications interact with dtrace, libdtrace is now a shared library. However, the API/ABI is not fully stabilized yet and may be subject to change.
    - As a result of the ongoing QA testing, some test cases were reorganized into their own subdirectories, which allows running the test suite in a more fine-grained manner.
    As a reminder, we have a dedicated Forum for DTrace on Linux to discuss your experiences with this release. This week, the Linux DTrace team also attended the second dtrace.conf in San Francisco to talk about their work. The sessions were streamed live, and a recording of Oracle's Kris Van Hees' talk is also available. We would like to thank the dtrace.conf organizers for the speaking opportunity and for organizing this event! This Wednesday (April 4th), Kris and Elena Zannoni also spoke on this topic at the Linux Foundation Collaboration Summit 2012 in San Francisco, CA. The slides are now available for download (PDF).

    Read the article

  • First Step Towards Rapid Enterprise Application Deployment

    - by Antoinette O'Sullivan
    Take Oracle VM Server for x86 training as a first step towards deploying enterprise applications rapidly. You have a choice between the following instructor-led training options:
    - Oracle VM with Oracle VM Server for x86, a 1-day seminar. Take this course from your own desk on one of the 300 events on the schedule. This seminar tells you how to build a virtualization platform using Oracle VM Manager and Oracle VM Server for x86, and how to sustain the deployment of highly configurable, inter-connected virtual machines.
    - Oracle VM Administration: Oracle VM Server for x86, a 3-day hands-on course. This course teaches you how to build a virtualization platform using Oracle VM Manager and Oracle VM Server for x86, and how to deploy and manage highly configurable, inter-connected virtual machines. It covers installing and configuring Oracle VM Server for x86 as well as details of network and storage configuration, pool and repository creation, and virtual machine management. Take this course from your own desk on one of the 450 events on the schedule, or in an Oracle classroom at one of the following events:
        Location                    Date              Delivery Language
        Istanbul, Turkey            12 November 2012  Turkish
        Wellington, New Zealand     10 Dec 2012       English
        Roseville, United States    19 November 2012  English
        Warsaw, Poland              17 October 2012   Polish
        Paris, France               17 October 2012   French
        Paris, France               21 November 2012  French
        Dusseldorf, Germany         5 November 2012   German
    For more information on Oracle's Virtualization courses see http://oracle.com/education/vm

    Read the article

  • Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications

    - by kimberly.billings
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle recently introduced the Oracle Communications Data Model (OCDM). With the OCDM, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the OCDM, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Hong Kong Broadband Network, the fastest growing and second largest broadband service provider in Hong Kong, enhanced its data warehouse using Oracle Communications Data Model. It went live with OCDM within three months, and has increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Read more about HKBN's use of OCDM. Read more about OCDM.

    Read the article

  • Computer Science Degree or Computer Engineering Degree?

    - by Paul
    Hello everyone, I'm 23 years old, living in Italy, and this year I will be getting my high school diploma. I'm interested in pursuing a college degree and working in the IT field. At the moment I'm teaching myself Java (I also know Python, HTML, CSS and MySQL), and I'm also learning about algorithms and OO design. I'm curious how important a college degree is for me, considering my age, and whether there is a big difference between computer science and computer engineering. There is a computer science university where I currently live, but not a computer engineering one. For some reason, universities that offer computer engineering courses are only in bigger cities such as Milan, Bologna, Roma. Cost-wise, it would be cheaper for me to study near home at a computer science school. Career-wise, would a computer engineering degree offer me more work opportunities than a computer science degree? Is it easier transitioning from CS to CEN or vice versa? I'm not exactly sure what type of job I want to pursue in the future since I'm still a bit undecided, but definitely not system/network administrator, database administrator, or game developer.

    Read the article

  • CentOS running inside VMware as WebServer times out on outside connection

    - by Tom Hart
    I have a CentOS machine running inside VMware, and I have PHP and Apache set up on it. If I open a browser on the VM and go to either localhost or 192.168.0.3, I get the phpinfo page I made in /var/www/html/index.php. But if I go to 192.168.0.3 in a browser on the host (Windows 7), it times out. I can ping the IP address from Windows and get a response; I just can't reach it through the browser. Does anyone have any ideas what I need to do to get this working? This is my first time using a VM and I'm getting lost in the network settings.
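
    Since ping works but HTTP does not, the usual suspect on a stock CentOS guest is the default iptables firewall. A generic sketch of checking and opening the web port (this assumes the classic iptables service rather than any custom firewall):

        sudo iptables -L -n                                  # is port 80 being filtered?
        sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT   # open the web port
        sudo service iptables save                           # persist the rule across reboots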

    Read the article

  • Start daemon after specific samba share is mounted

    - by getack
    I have a homebrew headless NAS running 12.04. In it I have a bunch of disks that are presented as a Samba share thanks to Greyhole. If I want to do anything to the files within this share, I must do it through Greyhole so that everything is updated properly. Thus, the share must be mounted locally and then accessed from there if I want to work on the files from the local machine. I do this mounting automatically thanks to these instructions. I also have Deluge installed, which takes care of all my torrenting needs. Deluge's default download location is in this share, so that all downloads are immediately available to the rest of the network. Obviously, for everything to work the share must be mounted, otherwise Deluge is going to have a problem downloading to it. The problem is, it seems like Deluge is starting before the share is mounted when the system boots, so downloading/seeding does not continue automatically after boot. I have to log in and force a manual rescan and start on each torrent, otherwise all the torrents just hang on the error. Is there a way I can make Deluge start only after the share has been properly mounted? I looked into Upstart's emits functionality but I cannot seem to get it to work properly. Any advice?
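
    In case it helps to see the shape of it, a sketch of an Upstart job that waits for a specific mount point (the job name, paths and user are placeholders; it assumes the share is mounted via fstab/mountall, which is what emits the mounted event, and that deluged is not also being started some other way):

        # /etc/init/deluged.conf (hypothetical)
        description "Start deluged only after the Greyhole share is mounted"
        start on mounted MOUNTPOINT=/mnt/greyhole-share
        stop on runlevel [!2345]
        setuid deluge
        exec /usr/bin/deluged -d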

    Read the article

  • Flex 4 + Apache Ant, Cannot Load FlashPunk Libraries

    - by SquareCrow
    I have been searching Google, the Apache docs*, and the FlashPunk forums looking for an answer to this: I cannot get Ant/Flex to find and compile the FlashPunk libraries. Here is my build.xml:
    [code]
    <!-- Fetch the JAR full of Flex tasks if it is not already in the source directory -->
    <copy file="${FLEX_HOME}/ant/lib/flexTasks.jar" todir="${SOURCE_PATH}"/>
    <!-- Add flextasks to the project -->
    <taskdef resource="flexTasks.tasks" classpath="${SOURCE_PATH}/flexTasks.jar"></taskdef>
    <!-- Release build Flash Player 10.1 -->
    <target name="build">
        <!-- Build the FlashPunk library -->
        <echo message="building swc..." />
        <compc output="FlashPunk.swc" keep-generated-actionscript="false" incremental="false"
               optimize="false" debug="true" use-network="false">
            <include-sources dir="${FLASHPUNK_PATH}/net" includes="**/* flashpunk/utils/* flashpunk/masks/*" excludes="**/*.TTF **/*.png"/>
            <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/>
        </compc>
        <echo message="building swf..." />
        <mxmlc file="${SOURCE_PATH}/epOne.as" output="${OUTPUT_PATH}/epOne.swf" debug="false"
               incremental="false" strict="true" accessible="false" link-report="link_report.xml"
               static-link-runtime-shared-libraries="true">
            <optimize>true</optimize>
        </mxmlc>
    </target>
    [/code]
    This results in many errors of the type "Definition net.flashpunk.masks:Grid could not be found", even though when I open the directories I can see the *.AS files right there. Sorry if this is very basic; I am piecing together knowledge of Ant from docs and tutorials. *I decided to use Ant because neither FlashDevelop for Windows nor Eclipse for Linux seem to work for me.

    Read the article

  • How to manage credentials on multiserver environment

    - by rush
    I have some software that uses its own encrypted file for password storage (such as FTP, web and other passwords used to log in to external systems; there is no way to use certificates). On each server I have several instances of this software, and each instance has its own password file. The number of servers is constantly growing, and it's getting harder and harder to keep all passwords on all instances up to date. Unfortunately, some servers are in a segregated network and there is no access from them to any centralized storage, though it works vice versa. My first idea was to create a git repository, encrypt each password with gpg, store it there and deliver it with the deployment system, but the security team was not satisfied with this idea, since in their words it is insecure to store passwords in a repository even in encrypted form. Nothing similar comes to mind. Is there any way to implement safe and secure password storage with minimal effort to keep all passwords up to date? PS: if it matters, I have Red Hat everywhere.

    Read the article

  • Ubuntu VM Guest - Samba Service Not Accessible from VM Host via Hostname

    - by phalacee
    I have a Windows 7 workstation with a Ubuntu 10.10 VM running in VirtualBox 3.2.12 r68302. I recently updated Samba and winbind, and since the update I am unable to access the machine via its hostname (\\mystique) from the VM host. I can access it by the "Host-only" IP (\\192.168.56.101) and the DHCP-assigned IP address (\\10.1.1.20), and I can connect to the webserver on the machine via its hostname (http://mystique/). As stated, accessing this machine via its hostname worked fine prior to the update, but has since stopped working. I have added the hostname to smb.conf as the netbios name, to no avail. My smb.conf [global] section looks like this:
        workgroup = NETWORK
        netbios name = Mystique
        server string = %h server (Samba, Ubuntu)
        dns proxy = no
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog = 0
        panic action = /usr/share/samba/panic-action %d
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        map to guest = bad user
        usershare allow guests = yes
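
    If it helps to see the shape of a workaround, pinning the name to the host-only address from the Windows side sidesteps NetBIOS name resolution entirely (the IP is the one quoted above; the file is the standard Windows hosts file and must be edited as Administrator):

        # C:\Windows\System32\drivers\etc\hosts
        192.168.56.101   mystique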

    Read the article

  • Internet cafe software for linux

    - by pehrs
    I have been asked to roll out a total of 8 internet cafes in a large network. The budget is non-existent, as it will all be done for a non-profit. I was planning to use Ubuntu and live CDs to minimize the amount of management required, but I can't seem to find any suitable internet cafe system that is Ubuntu based. The requirements are pretty basic: it needs to keep track of logged-in time and log users out when their time is up. No billing will be done; it will just be used to ensure people can share the computers fairly. It should be possible to force a logout from a central system. Users will be unskilled, so it has to have a GUI. What (preferably free, considering the shoe-string budget) software would you suggest to manage this?

    Read the article

  • Set-and-forget Windows backup software with NAS-support?

    - by Evert
    I am looking for set-and-forget backup software for Windows (Vista & 7, and if possible XP/2003). The idea is that it runs in the background on the clients, and does its thing towards a network-share. In case the HDD of one of these clients spontaneously combusts, all I want to have to do is: replace the drive, insert a USB-stick, boot from it, and restore the machine. It should support drives which use [ICH]-RAID. What are my options here? It looks like WHS meets all the requirements, but I am curious about my other options here.

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is written based on SafePeak, which is available for free download. Today, I'd like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. SafePeak's software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I'd like to look more closely at some of these options, as some of these capabilities could help you address lagging database and application performance on your systems. To better understand the available options, it is best to start by understanding the difference between the usual "Basic Caching" and SafePeak's "Dynamic Caching".
    Basic Caching: Basic Caching (or the stale and static cache) is the ability to put the results from a query into cache for a certain period of time. It is based on TTL, or Time-to-Live, and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data. In other words, Basic Caching is really a static, stale cache. As you can tell, this approach has its limitations.
    Dynamic Caching: Dynamic Caching (or the non-stale cache) is the ability to put the results from a query into cache while keeping the cache transaction-aware and watching for possible data modifications. The modifications can come as a result of DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, or complex cases of stored procedures with multiple levels of internal stored procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) becomes the actual data update commands.
    Now that we have a basic understanding of the differences between "basic" and "dynamic" caching, let's dive in deeper.
    SafePeak: A comprehensive and versatile caching platform. SafePeak comes with a wide range of caching options. Some of SafePeak's caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and data managers to reach excellent performance acceleration and application scalability for a wide range of business cases and applications:
    - Automated caching of SQL Queries: Fully/semi-automated caching of all "read" SQL queries, containing any types of data, including Blobs, XMLs, Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries and categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures.
    - Automated caching of Stored Procedures: Fully or semi-automated caching of all "read" stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak's Metadata-Learning process; their SQL schemas are parsed, resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures), enabling automated or semi-automated (manually review and activate by a mouse-click) cache activation, with full understanding of the transaction logic for real-time cache invalidation.
    - Transaction aware cache: Automated cache awareness for SQL transactions (SQL and in-procs).
    - Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response time value even in the most complicated use-cases.
    - Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as "read and deterministic" are automatically activated for caching.
    - Semi-Automated Caching: SQL Patterns categorized as "read and non-deterministic" are patterns of SQL queries and stored procedures that contain references to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator, and usually most of them are activated manually for caching (point-and-click activation).
    - Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of "write" commands (a DML or a stored procedure) to one of the relevant tables. This is the default setting.
    - Semi Dynamic Caching: A manual cache configuration option enabling you to reduce the sensitivity of specific SQL Patterns to "write" commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archived order details), or when the application's sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load).
    - Scheduled Cache Eviction: A manual cache configuration option enabling scheduling of SQL Pattern cache eviction at certain time(s) during the day. A very useful optimization technique when, for example, certain SQL Patterns can be cached but are time sensitive. Example: "select customers whose birthday is today", an SQL with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight).
    - Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are flagged as "Dynamic Objects" with the highest transaction-safety settings (such as: full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration step in a deployment.
    - Overriding Settings of Stored Procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, if a stored procedure SP1 has an "insert" into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a "logging or instrumentation" table left by developers. By overriding the settings, a user can allow caching of the problematic stored procedure.
    - Advanced Cache Warm-Up: Creating an XML-based list of queries and stored procedures (with lists of parameters) for periodic automated pre-fetching and caching. An advanced tool allowing you to handle rarer but very performance-sensitive queries, pre-fetching them into cache to deliver high performance for users' data access.
    - Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat-map of database objects and SQL Patterns. The performance-driven configuration helps you focus on the most important settings that bring the highest performance gains. SafePeak SQL Analytics allows continuous performance monitoring and analysis, and easy identification of bottlenecks in both real-time and historical data.
    - Cloud Ready: Available for instant deployment on Amazon Web Services (AWS).
    As you can see, there are many options to configure SafePeak's SQL Server database and application acceleration caching technology to best fit a lot of situations. If you're not familiar with their technology, they offer free-trial software you can download that comes with a free "help session" to help get you started. You can access the free trial here. Also, SafePeak is available for use on the Amazon cloud. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • ubuntu automount: only mounting drives as root?

    - by glisignoli
    I'm sharing the /mount dir with smb so users on my network can access and use drives added to my Linux box. Users are able to read files but not write, modify or delete files or directories. I'm using Ubuntu 10.04 server edition with halevt installed for USB auto-mounting. As far as I know, halevt is automounting the drives to /media/, but the drives are showing up as:
        drwxrwxr-x 1 root root 20480 2010-12-29 20:40 disk
        drwxrwxr-x 1 root root 24576 2010-12-21 17:20 Sparta
    mount gives me:
        /dev/sda1 on /boot type ext2 (rw)
        /dev/sdb1 on /media/disk type fuseblk (rw,nosuid,nodev,sync,allow_other,blksize=4096,default_permissions)
        /dev/sdc1 on /media/Sparta type fuseblk (rw,nosuid,nodev,sync,allow_other,blksize=4096,default_permissions)
    When I umount the drives, the folders /media/disk and /media/Sparta are both removed. I tried changing the permissions with chown to nobody:nogroup, but it doesn't work (which I assume is because they are NTFS drives).
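
    For reference, NTFS volumes mounted through ntfs-3g/fuseblk take their ownership and permissions from mount options rather than from chown, so write access is granted at mount time. A sketch of the kind of options involved, shown as an /etc/fstab line rather than a halevt policy (the UUID, mount point and uid/gid are placeholders):

        UUID=XXXX-XXXX  /media/disk  ntfs-3g  defaults,uid=1000,gid=1000,umask=002  0  0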

    Read the article

  • Connect to SVN repository with Netbeans using SVN+SSH

    - by shuby_rocks
    Hello all, I am trying to connect to an SVN server in order to import my project into it, using the svn+ssh authentication method. I am using the NetBeans IDE (6.8) with the Subversion plugin installed, on Windows XP SP2. I have plink installed, with its path set in the Windows PATH environment variable. When I use a repository URL like the following (XXXX and YYYY replaced with sensible things):
        svn+ssh://XXXX@YYYY/home/dce/svn/trunk
    along with this external tunnel command:
        plink -l <myUserName> -i C:\\privateKey.ppk
    I keep getting this error:
        org.tigris.subversion.javahl.ClientException: Network connection closed unexpectedly
    I searched about it on the Internet and tried many things, but nothing worked out. Please help if anybody has some idea of what may be going wrong. Thanks a lot in advance.
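
    For reference, a sketch of how an svn+ssh tunnel is typically defined on Windows when plink is the transport (the user name and key path are placeholders; the file is the per-user Subversion config at %APPDATA%\Subversion\config, and -batch stops plink from waiting on interactive prompts):

        [tunnels]
        ssh = plink.exe -batch -l myUserName -i C:/privateKey.ppk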

    Read the article

  • How to Make Your Verizon FIOS Router 1000% More Secure

    - by The Geek
    If you’ve just switched to Verizon FIOS and they’ve installed the new router in your house, there’s just one problem: it’s set to use lousy WEP encryption by default, instead of the much more secure WPA2. Here’s how to fix it. The problem with WEP encryption is that it can be cracked really easily—a skilled hacker can do it in a few minutes, and even an unskilled geek can do it in just a little more time with the right tools. Once they’ve done that, they can leech off your internet connection and do anything they want—including illegal stuff coming from your network. Note: if you are using an old Nintendo DS connected to the internet, they usually only support WEP encryption, so you may not want to do this.

    Read the article

  • How can I effectively block torrenting?

    - by Chauncellor
    My WNR1000v3 is serving six people, and two of them have decided that despite my warnings they're going to torrent heavily all day. Not dealing with that crap, I decided to reserve their IPs and set up port blocking for 1000-65535 at all times of the day. However, looking at the log reveals that stuff is still going through. Half of the entries say:
        [LAN access from remote] from <externalIP>:16001 to 192.168.1.7:18946 Friday, Oct 12,2012 22:47:05
    and half say:
        [Service blocked: BlockTorrents] from source 192.168.1.7, Friday, Oct 12,2012 22:46:26
    Is this because of UPnP? Or does the 'block services' feature Netgear has only work on outgoing connections? Is there something I'm missing? If it is indeed UPnP, how could I effectively block their torrenting without hurting everyone's use of services like Skype, Playstation Network, etc.?

    Read the article

  • XenServer: Editing clone configuration before boot

    - by Jeff Ferland
    Upon cloning a base image, I need to reconfigure basic settings. Regenerating the ssh host key, changing static IP assignments, setting the host name, etc. Because of the network setup, DHCP is not an option. That more or less rules out SSHing in with a predefined key or running a startup script since I can't provide the IP externally. I'd most like to mount the filesystem of the new machine on Dom0, but the lvm volumes are exported and it appears to be Bad Form to import them so the Dom0 machine can see them. What's your best suggestion for altering files in a cloned VM before boot? Must be non-interactive, and I'm going to guess out the gate that scripting access via xe console is not going to work well.
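
    For what it's worth, a sketch of the usual way a clone's disk gets exposed to the control domain so its filesystem can be edited before first boot (the UUIDs, labels and device names are placeholders; this assumes you can identify the clone's VDI and that its filesystem is directly mountable from dom0):

        # identify dom0 and the clone's disk
        xe vm-list is-control-domain=true --minimal
        xe vdi-list name-label=<clone-disk-label>

        # attach the VDI to dom0 and plug it
        xe vbd-create vm-uuid=<dom0-uuid> vdi-uuid=<vdi-uuid> device=autodetect
        xe vbd-plug uuid=<vbd-uuid>

        # the block device dom0 sees is reported in the VBD's 'device' field
        xe vbd-param-get uuid=<vbd-uuid> param-name=device
        mount /dev/<that-device> /mnt    # edit hostname, static IP, ssh host keys, then umount

        # clean up before booting the clone
        xe vbd-unplug uuid=<vbd-uuid>
        xe vbd-destroy uuid=<vbd-uuid>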

    Read the article
