Search Results

Search found 19525 results on 781 pages for 'say'.

  • Moving my OpenID from Livejournal to... something else.

    - by T-Boy
    I've actually been an early user of OpenID, although there are still some questions about OpenID that I've never had satisfactorily answered.

    Now, I understand that if I have full control over my domain, I can set it up so that I delegate the task of authenticating to another OpenID service provider. The problem is, what I'd like to do is get the Livejournal server to pass the authentication to someone else, instead of having LJ do it. Preferably, I'd like Livejournal, when asked by an authenticating provider, to say: "No, I don't do it anymore; go to this address." The plan was that this address would be in a domain I fully control, which would then pass it on to whichever service provider I choose.

    I don't even know if I've got my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like Livejournal.

    (I tried tagging this with livejournal, and it told me I couldn't, because I don't have enough reputation. Oh well; one must start somewhere. Sorry for the inconvenience!)
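
    For context, the delegation mechanism in question is a pair of <link> tags placed in the <head> of a page on one's own domain (these rel values are the standard OpenID delegation markup; the provider URLs below are placeholders):

        <!-- OpenID 1.x delegation -->
        <link rel="openid.server"   href="https://provider.example.com/server">
        <link rel="openid.delegate" href="https://me.example.com/">

        <!-- OpenID 2.0 equivalents -->
        <link rel="openid2.provider" href="https://provider.example.com/server">
        <link rel="openid2.local_id" href="https://me.example.com/">

    The catch the question turns on: these tags live on the page the identity URL points to, so they only help once the identity URL is already a page one controls.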

  • PowerPoint 2007 slides are only partially converted to PDF since SP3

    - by Tim Pietzcker
    EDIT: Microsoft support has confirmed that it's a bug in PowerPoint 2007 SP3.

    I have recently encountered a problem with the "Save as PDF/XPS" add-in for PowerPoint 2007. When I use "Save as PDF/XPS" to create a PDF version of my presentation, some slides are only partially included in the resulting PDF file. For example, this (download the PPTX file here) is reduced to this, in Adobe Reader X or Acrobat Pro X, both 10.1.1 (download the PDF file here).

    So far, I have only encountered this with slides that contain animation elements, but which parts of the elements remain in the PDF version appears to have nothing to do with the order in which the animated elements appear, so that might just be a coincidence.

    Update: The problem persists even if I "un-animate" the slides (removing the animation but leaving the previously animated elements intact). When viewing the affected slides in Acrobat Reader, it sometimes complains about the file containing invalid elements, and says that I should complain to whoever generated the PDF file...

    Update 2: I have just installed Office 2007 on a new Windows 7 x64 PC. With the original Office version (12.0.4518.1014 MSO 12.0.6562.5003), a correct PDF file is generated. After installation of SP3 (12.0.6606.1000 SP3 MSO 12.0.6607.1000), a corrupt PDF file is generated. Today's Microsoft updates (to PowerPoint version 12.0.6654.5000) haven't changed anything, by the way.

    Update 3: I have opened a tech support incident with Microsoft. They have confirmed the "limitation", as they called it, and it is indeed limited to 2007 SP3 only. They are going to pass it on to the developers, but they can't say when or even if a fix will be forthcoming, so I guess I'll upgrade to 2010...

  • Need help trying to diagnose Symmetrix SAN performance issues

    - by arcain
    I am helping to benchmark hardware for a new SQL Server instance, and the volume presented to the OS for the data files is carved from a set of spindles on a Symmetrix SAN. The server has yet to have SQL Server installed, so the only activity on the box is our benchmarking.

    Now, our storage engineers say that this volume and its resources are dedicated to our new server (I don't have access to see the actual SAN config), but the performance benchmarks are troubling. The numbers look good until suddenly, and randomly, our IO benchmarking tool shows wait times of 100 seconds, and perfmon shows disk queue lengths of 255.

    This SAN has an 8 GB cache, and there are other applications besides ours that use the SAN. I'm wondering if (even though the spindles for our volumes should be dedicated to us) the cache may be getting hammered during the performance testing, or perhaps the spindles our volumes are on aren't really dedicated to us.

    We're not getting much traction from our storage engineers in helping us track down the problem, so if anybody has experience diagnosing a problem like this and would like to share insights and troubleshooting methodologies, I'd appreciate it.
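
    When pushing back on the storage team, hard numbers help; a sketch of logging the relevant PhysicalDisk counters over an hour so the stall windows can be correlated with SAN-side stats (counter paths are the standard Windows ones; the sample counts are arbitrary):

        # capture per-disk latency and queue depth once a second for an hour
        Get-Counter -Counter @(
            '\PhysicalDisk(*)\Avg. Disk sec/Read',
            '\PhysicalDisk(*)\Avg. Disk sec/Write',
            '\PhysicalDisk(*)\Current Disk Queue Length'
        ) -SampleInterval 1 -MaxSamples 3600

    Timestamped spikes in Avg. Disk sec/Read that line up with cache-destage or other tenants' activity on the array would support the shared-cache theory.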

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. Problem is, we also have developer tools that only work with 'officially' supported Linux distributions, which use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.)

    We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :)

    Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (and with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I'd be very interested to hear how anyone else has addressed this kind of issue.

    EDIT: The target audience / users of this kind of system would be developers, and each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there to build your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
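
    On the hardware-virtualization question specifically, the check is quick (these are the standard /proc/cpuinfo flags):

        # vmx = Intel VT-x, svm = AMD-V; no output means no hardware
        # virtualization support (or it's disabled in the BIOS)
        egrep -o 'vmx|svm' /proc/cpuinfo | sort -u

    KVM does require one of those flags; machines without them would need something like Xen paravirtualization or VMware's binary translation instead, which don't.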

  • CIFS Mounting Permissions

    - by malco
    I have an issue that I'm going round in circles with; I hope you can help.

    The setup:

        Server 1 (CIFS client) - CentOS 6.3, AD-integrated using Samba/Winbind & idmap_ad
        Server 2 (CIFS server) - CentOS 6.3, AD-integrated using Samba/Winbind & idmap_ad

    All users (apart from root) are AD-authenticated, and this, including groups etc., works happily.

    What's working: I have created a share on Server 2:

        [share2]
        path = /srv/samba/share2
        writeable = yes

    Permissions on the share:

        drwxrwx---. 2 root domain users 4096 Oct 12 09:21 share2

    I can log into a Windows machine as user5 (a member of domain users) and everything works as it should; for example, if I create a file it shows the correct permissions and attributes on both the MS and the Linux sides.

    Where I fall down: I mount the share on Server 1 using:

        # mount //server2/share2 /mnt/share2/ -o username=cifsmount,password=blah,domain=blah

    Or using fstab:

        //server2/share2 /mnt/share2 cifs credentials=/blah/.creds 0 0

    This mounts fine, but... if I su, or log onto Server 1 as a normal user (say user5), and try to create a file I get:

        touch test
        touch: cannot touch `test': Permission denied

    Then if I check the folder, the file was created, but as the cifsmount user:

        -rw-r--r--. 1 cifsmount domain users 0 Oct 12 09:21 test

    I can rename, delete, move or copy stuff around as user5; I just can't create anything. What am I doing wrong? I'm guessing it's something to do with the mount action, as when I log onto Server 2 as user5 and access the folder locally it all works as it should. Can anyone point me in the right direction?
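
    One avenue worth noting (a sketch, not a confirmed fix): a CIFS mount presents one set of credentials for everyone, and mount.cifs lets you pin the ownership and effective permissions local users see via the uid, gid, file_mode and dir_mode options. The option names are standard mount.cifs options; the uid/gid values here are placeholders to be taken from `id user5`:

        # map files seen through the mount to a local AD user/group
        mount -t cifs //server2/share2 /mnt/share2 \
            -o credentials=/blah/.creds,uid=user5,gid=10000,file_mode=0770,dir_mode=0770

    Newer kernels also have a 'multiuser' mount option that presents each local user's own credentials to the server, though whether the CentOS 6.3 kernel supports it would need checking.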

  • NTFS 'Owner' missing when accessing hard disk from external USB adapter

    - by trismarck
    I have a hard drive with Windows XP SP3 installed on it. When the drive is connected through the standard SATA connector inside the laptop, everything works as expected. However, when I remove the drive from the laptop and connect it to an external USB adapter, almost all files and folders lose the contents of their 'Owner' field. I was wondering why that could be. I've tried two USB adapters and this happens with each. I could take ownership of all of the files, but this would overwrite the Owner value (the value that is present when the drive is accessed through the standard SATA connector in the laptop).

    //edit: if the hard drive is used through the USB adapter, I can't access most of the files, at least until I take ownership of the files (/folders). This is how it looks:

        HDD inside USB adapter: [screenshot]
        HDD inside laptop: [screenshot] (note the Owner column)

    //edit: some of the files in the first screenshot have the Owner field filled in. That's because I took ownership of those files / folders to be able to access the files on the hard drive.

    //edit2: also, if the hard drive is connected through the USB adapter and I've taken ownership of some files as the 'ddd' user, then if I log in as a different user (let's say the 'eee' user), the Owner field is _still_ empty:

        ddd user: [screenshot]
        eee user: [screenshot]

    The 'eee' user can't access the 'ddd' folder. Both users have Administrator privileges.

  • How to include worksheet 3 and 4 in a cell formula provided?

    - by user21255
    I have been kindly given this formula with an explanation of how it works. Insert this formula into cell B4 of the sheet "Cases":

        =IF(NOT(ISBLANK('1st'!B25)),'1st'!B25,IF(NOT(ISBLANK(INDIRECT("'2nd'!R" & (ROW($B4)-(COUNTA('1st'!$B:$B)-COUNTA('1st'!$B$1:$B$24))-4+25) & "C" & COLUMN(B4),FALSE))),INDIRECT("'2nd'!R" & (ROW($B4)-(COUNTA('1st'!$B:$B)-COUNTA('1st'!$B$1:$B$24))-4+25) & "C" & COLUMN(B4),FALSE),""))

    Copy the formula to the other cells in the worksheet; the relative addresses will adjust automatically. The formula works like this:

        1. Check if there is content in 1st. If yes, copy it.
        2. If no, find out how many entries there are in 1st in total. (This is done by using the COUNTA function on the whole B column in 1st and subtracting the number of non-empty cells above the actual case data.)
        3. Use this information together with the current cell's number to find out the location of the cell that has to be copied from 2nd.
        4. Create the address of that cell and use the ISBLANK function on the INDIRECT function with that address to check if the cell is empty. If it is not, use the INDIRECT function again to display it. If it is empty, just display an empty string.

    Now this works fine when I have only 2 sheets. But let's say I want to include a third and fourth sheet (named 3rd and 4th respectively): what should I add to the formula above, and where? There are actually 31 sheets, but if I know how to add the 3rd and 4th sheets to the formula, I can figure out how to do the rest. Thanks
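
    For what it's worth, a sketch of the shape the extension would take (not a tested formula; the row offset for each added sheet must subtract the combined entry counts of all preceding sheets, by analogy with the existing COUNTA term):

        =IF(<1st has value>, <1st value>,
         IF(<2nd has value>, <2nd value>,
          IF(<3rd has value>, <3rd value>,
           IF(<4th has value>, <4th value>, ""))))

    That is, each additional sheet nests one more IF(NOT(ISBLANK(INDIRECT(...)))) branch into the final "" argument of the previous one.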

  • SQL cluster instance names for large project

    - by Sam
    We're setting up two clusters, one dev and one prod. The production cluster will host two SQL instances: an OLTP and a DW. The development cluster will host 4 OLTP non-production environments and at least one DW non-production. We're working on getting more DW non-prods and possibly more OLTP systems.

    I'm considering a naming scheme like this, where PROJ would be 3 initials for the project name.

    Dev cluster:

        MSSQLPROJD1\D1 (DEV)
        MSSQLPROJD2\D2 (TEST)
        MSSQLPROJD3\D3 (QA)
        MSSQLPROJD4\D4 (STAGE)
        MSSQLPROJD5\D5 (DW)

    Prod cluster:

        MSSQLPROJP1\P1 (PRD)
        MSSQLPROJP2\P2 (DW)

    To the left of the slash, each name must be unique network-wide. On each server, the instance name, to the right of the slash, must be unique. Any thoughts on this? I'm trying to avoid having instance names drift from reality as the project progresses, say if we change what we call a certain environment or want to repurpose one. Then we can update a listing of the purposes for the instances and be done with it. How has a scheme like this worked out for you? Maybe you do things another way in your shop; tell me about it. Thanks.

  • How to transform a csv to combine matching rows?

    - by Christian Wolf
    I have a CSV file with some transaction data. Let's say date, volume, price and direction (sell/buy). Additionally there is an ID for each transaction, and each closing transaction (the newer one) carries a reference to the corresponding opening transaction. Classical database referencing.

    Now I want to do some statistics and draw some plots. This could be done via Octave, LaTeX/TikZ, Gnuplot or whatever. To do this I need both the buy and sell price in one row. My thought was to preprocess the CSV to get another CSV containing the needed information, and then do the statistics. In the end I'd like a solution based on scripts and not on a spreadsheet, as the data might change often (it's exported from an online DB).

    My actual solution (see http://paste.ubuntu.com/6262822/ ) is a bash script that parses the CSV line by line and checks if there is a corresponding transaction. If one is found, a new row is written to the destination CSV. If not, a warning is printed. The bad news: for each row in the source file I have to read the whole file a few times. This causes running times of 10 seconds for 300 lines. As the line count might rise soon (10k lines), this is not perfect. I am aware that the script opens many subshells, which might cause the performance problems.

    Now my questions: Is bash/awk/sed/... a good way to do this? Should I first import all data into a "real" local database and use SQL? Is there an easy way to achieve the desired results?
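
    For comparison, a single-pass awk approach, sketched under two assumptions that may not match the real export: the columns are id,ref,date,volume,price,direction in that order, and an opening row always appears before the closing row that references it:

        # join each closing transaction with the opening one it references
        awk -F',' '
        NR == 1 { next }                        # skip the header row
        {
            date[$1] = $3; price[$1] = $5; dir[$1] = $6
            if ($2 != "" && ($2 in price))      # closing row: partner already seen
                print date[$2] "," $3 "," price[$2] "," $5 "," dir[$2]
        }' transactions.csv > pairs.csv

    Because every row is kept in an associative array, the file is read exactly once: O(n) instead of the O(n²) re-scanning the bash script does, which is the difference between 10 seconds at 300 lines and effectively instantaneous at 10k lines.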

  • What are these CPU cache settings? Snoop Filter, ACL prefetch, HW prefetch

    - by eater
    I was in my BIOS setup turning on VT-x support today and saw these other settings. A little googling indicates that they each seem to turn on some sort of optimization to do with the CPU's L2 cache. They were all turned off by default.

    The processor in question is an Intel Xeon quad-core 3.4GHz (X5492). My OS is Linux 2.6.35.10-74.fc14.x86_64 #1 SMP Thu Dec 23 16:04:50 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. I have 4GB of RAM, if that matters. Here's what the BIOS manufacturer has to say:

    Snoop Filter: "Enabling the snoop filter typically improves performance by reducing snoop traffic on the frontside bus in dual processor configurations." Well, I like the sound of improved performance. Why would the BIOS have this off by default? Or by dual processor do they not mean multi-core? Regardless, is there a downside if this is on?

    ACL Prefetch: "When enabled, the Adjacent Cache Line Prefetcher fetches both cache lines that comprise a cache line pair when it determines required data is not currently in its cache. When disabled, the processor will only fetch the cache line required by the processor."

    HW Prefetch: "Fetches an extra line of data into L2 from external memory."

    Both of these sound like optimizations that have some drawbacks. What are the reasons to turn them on? What are the reasons to leave them off? Why is the default off?

  • mkvmerge: How to merge two videos, one without audio?

    - by ProGNOMmers
    I have two videos, one without audio (the second). Trying to merge them, I get this error:

        mkvmerge concat1.webm +concat2.webm -o output.webm
        mkvmerge v5.8.0 ('No Sleep / Pillow') built on Oct 19 2012 13:07:37
        Automatically enabling WebM compliance mode due to output file name extension.
        'concat1.webm': Using the demultiplexer for the format 'Matroska'.
        'concat2.webm': Using the demultiplexer for the format 'Matroska'.
        'concat1.webm' track 0: Using the output module for the format 'VP8'.
        'concat2.webm' track 0: Using the output module for the format 'VP8'.
        'concat2.webm' track 1: Using the output module for the format 'Vorbis'.
        No append mapping was given for the file no. 1 ('concat2.webm'). A default mapping of 1:0:0:0,1:1:0:1 will be used instead. Please keep that in mind if mkvmerge aborts with an error message regarding invalid '--append-to' options.
        Error: The file no. 0 ('concat1.webm') does not contain a track with the ID 1, or that track is not to be copied. Therefore no track can be appended to it. The argument for '--append-to' was invalid.

    Is there a way to tell mkvmerge to make the audio track longer? Thank you!
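
    One workaround people reach for in this situation (a sketch, not verified against these exact files): give the silent clip a real audio track before merging, so both files have matching track layouts. The error output suggests the video-only clip is actually concat1.webm; the channel layout and sample rate below are assumptions:

        # add a silent Vorbis track to the video-only clip using ffmpeg's anullsrc source
        ffmpeg -i concat1.webm \
               -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
               -shortest -c:v copy -c:a libvorbis concat1-silent.webm

        # then the append works on matching track sets
        mkvmerge concat1-silent.webm +concat2.webm -o output.webm

    -shortest trims the generated silence to the video's length, and -c:v copy avoids re-encoding the VP8 stream.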

  • plesk: how to configure reverse proxy rules properly?

    - by rvdb
    I'm trying to configure reverse proxy rules in vhost.conf. I have Apache 2.2.8 on Ubuntu 8.04, managed by Plesk 10.4.4. What I'm trying to achieve is a reverse proxy rule that forwards all traffic for, say, http://mydomain/tomcat/ to the Tomcat server running on port 8080. I have mod_rewrite and mod_proxy loaded in Apache.

    As far as I understand the mod_proxy docs, entering the following rules in /var/www/vhosts/mydomain/conf/vhost.conf should work:

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyRequests off
        RewriteRule ^/tomcat/(.*)$ http://mydomain:8080/$1 [P]

    Yet I am getting an HTTP 500 internal server error when requesting the above URL. (Note: I decided to use a rewrite rule in order to at least get some information logged.) I have made mod_rewrite log extensively and find the following entries in the logs (note: due to a limitation of max. 2 URLs in posts of new users, I have modified all following URLs so that they only contain 1 slash after "http:". In case you're suspecting typos: this was done on purpose):

        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) init rewrite engine with requested uri /tomcat/testApp/
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (3) applying pattern '^/tomcat/(.*)$' to uri '/tomcat/testApp/'
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) rewrite '/tomcat/testApp/' -> 'http:/mydomain:8080/testApp/'
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) forcing proxy-throughput with http:/mydomain:8080/testApp/
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (1) go-ahead with proxy request proxy:http:/mydomain:8080/testApp/ [OK]

    This suggests that the rewrite and proxy part is processed OK; still, the proxied request produces a 500 error. Yet: addressing the testApp directly via http:/mydomain:8080/testApp does work, and the same setup works on my local computer. Is there something else (Plesk-related, perhaps?) I should configure? Many thanks for any pointers! Ron
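
    For reference, the more conventional way to express this mapping (a sketch; untested in this particular Plesk setup) is with mod_proxy's own directives, which also fix up redirect headers coming back from Tomcat, something the RewriteRule [P] approach does not do:

        ProxyRequests Off
        ProxyPass        /tomcat/ http://mydomain:8080/
        ProxyPassReverse /tomcat/ http://mydomain:8080/

    ProxyPassReverse matters here: without it, any redirect Tomcat issues (e.g. adding a trailing slash to /tomcat/testApp) points the browser at port 8080 directly, which can surface as broken or erroring requests.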

  • How to set up daisy-chained routers for separate sub-nets?

    - by joe
    This question seems to be similar to others, but I'll take a shot anyway. A client recently switched ISPs from TDS to Comcast Business Class. Before the switch, they had 5 static IP addresses assigned. Now they have a single IP address that will change whenever Comcast decides to change it.

    The issue is that this internet connection will be shared between two companies, each having (and wanting to keep) its own private subnet. Because TDS was supplying multiple IP addresses to the one location, I could put each company's router directly on the switch. Now, with Comcast, they get only one IP address, meaning there has to be a main router in front of the subnet routers. Luckily, the cable modem has a built-in router, which I would like to connect to each company's router, with DHCP enabled on all of them.

    Question: what do I need to do to the subnet routers to keep them separate from each other, but still allow internet access through the main router? I would love to say "I tried this" and give you links, but everything I find on the internet only mentions daisy-chaining routers with DHCP disabled.
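
    A sketch of the addressing this usually implies (the addresses are illustrative, not prescriptive):

        Comcast modem/router:  LAN 192.168.1.0/24, DHCP on
        Company A router:      WAN 192.168.1.2 (from the modem), LAN 10.0.1.0/24, DHCP on
        Company B router:      WAN 192.168.1.3 (from the modem), LAN 10.0.2.0/24, DHCP on

    Each company router performs NAT on its WAN side, so hosts behind A cannot initiate connections into B's LAN (and vice versa) unless port forwards are added, while both still reach the internet through the modem. DHCP stays enabled on all three devices because each serves a different segment; the only rule is that the three LAN subnets must not overlap.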

  • Application runs fine manually but fails as a scheduled task

    - by user42540
    I wasn't sure if this should go here or on Stack Overflow. I have an application that loads some files from a network share (the input folder), extracts certain data from them, and saves new files (zipping them with SharpZipLib) on a different network share (the output folder). This application runs fine when you open it directly, but when it is set up as a scheduled task, it fails in numerous places. The application is scheduled on a Windows 2003 server.

    Let me say right off the bat: the scheduled task is set to use the same login account that I am currently logged in with, so it's not failing because it's using the LocalSystem account. Something else is going on here.

    Originally, the application assigned a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done; someone else on our team did that and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command line utility to SharpZipLib because there were other issues with the WinZip command line. Anyway, the application failed when trying to assign a drive letter, with the error "connection already established." That wasn't true, and even after trying WNetCancelConnection(), it still didn't work.

    Then I decided to just map the drive manually on the server. Now when the app calls Directory.Exists(inputFolderPath) it returns false, even though the folder does exist. So, for whatever reason, I cannot read this directory from within the application, yet I can manually navigate to the folder in Windows Explorer and open files. The app's log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?

  • How can I make WSUS less invasive for our users?

    - by Cypher
    We have WSUS pushing updates out to our users' workstations, and things are going relatively well, with one annoying caveat: there seems to be an issue with a pop-up being displayed in front of some users informing them that their machine will be rebooted in 15 minutes, and they have no say about it: [screenshot] This may be because they did not log out the prior night. Nevertheless, this is a bit too much and is very counter-productive for our users.

    Here is a bit about our environment: our users are running Windows XP Pro and are part of an Active Directory domain. WSUS is being applied via Group Policy. Here is a snapshot of the GPO that is enforcing the WSUS rules: [screenshot]

    Here is how I want WSUS to work (ideally; I'll take whatever can get me close):

        1. I want updates to automatically download and install every night.
        2. If a user is not logged in, I would like the machine to reboot.
        3. If a user is logged in, I would like their machine not to reboot, but instead wait until the next "installation period", where it can perform any other needed installations and reboot then (provided a user account is not still logged in).
        4. If a user is to be prompted for a reboot, it should happen only once per day (if possible), but every time they are prompted, they must have a way to postpone the reboot.

    I do not want users to be forced to restart their computer whenever the computer thinks it should happen (unless it's after an update installation and there are no logged-in users). It doesn't seem productive to force a system restart in the midst of a person's workday. Is there something I can do with the GPO that would help make WSUS less intrusive? Even if it gave the user an option to Restart Later, that would be better than what is happening now.
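
    To frame the GPO discussion, these are the standard WindowsUpdate\AU policy registry values that govern the reboot behaviour described above (the value data shown is illustrative, not a recommendation):

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        "NoAutoRebootWithLoggedOnUsers"=dword:00000001   ; don't force a reboot while someone is logged in
        "RebootRelaunchTimeoutEnabled"=dword:00000001
        "RebootRelaunchTimeout"=dword:000005a0           ; re-prompt interval in minutes (0x5a0 = 1440, i.e. once a day)

    The corresponding Group Policy settings live under Computer Configuration > Administrative Templates > Windows Components > Windows Update; the first maps to "No auto-restart with logged on users for scheduled automatic updates installations" and the others to "Re-prompt for restart with scheduled installations".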

  • Making "default saved" work with GRUB2...?

    - by baltusaj
    I just installed the Moblin operating system. It's using GRUB2. On my Ubuntu 8.04, GRUB 0.97 was used, and there I used the "default saved" option comfortably. I found that with GRUB2 I should not edit /boot/grub/menu.lst directly, but I did :) because my Moblin does not contain any /etc/default/grub, where they say I should make the modification I want. So what I did is the following, which did not work:

        default=saved
        timeout=1
        #splashimage=(hd0,0)/boot/grub/splash.xpm.gz
        #hiddenmenu
        #silent

        title Moblin (2.6.31.5-10.1.moblin2-netbook)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.31.5-10.1.moblin2-netbook ro root=/dev/sda1 vga=current
            savedefault=1

        title Pathetic Windows
            rootnoverify (hd0,1)
            chainloader +1
            savedefault=0

    By doing so I should have automatically switched between Moblin and Windows at each boot, but it's not working. Almost all the troubleshooting guides on the internet say I should enable the DEFAULT=saved option in /etc/default/grub, but I am unable to find this file. Any idea what else I should do? Thanks a lot.

    Update: I used the equals sign because by default my menu.lst had an entry default=0. However, "default 0" also works fine. Moreover, the menu.lst I have is actually a symbolic link to ./grub.conf. I have also noticed that the grub-install and grub-set-default commands are not working.
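
    A detail that may matter here (based on GRUB legacy's documented syntax, not tested on Moblin): the presence of menu.lst / grub.conf suggests this is actually a GRUB 0.97-style legacy config, and in legacy syntax the saved-default directives take no equals sign:

        default saved

        title Moblin (2.6.31.5-10.1.moblin2-netbook)
            ...
            savedefault 1    # save "boot Windows next time"

        title Pathetic Windows
            ...
            savedefault 0    # save "boot Moblin next time"

    One caveat: whether savedefault accepts a numeric entry argument depends on the build (Debian/Ubuntu carried a patch for it); plain upstream 0.97 only saves the entry currently being booted.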

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant, and required.

    I need to be able to perform periodic maintenance without a maintenance window. Say an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET).

    I've been thinking along the lines of using the warm spare (or a 3rd server) to do the maintenance, and then "swapping" it into production:

        1. Synchronize the spare by restoring backups, including a current transaction log.
        2. Perform the maintenance tasks.
        3. Reconfigure clients to connect to the spare server. Existing connections finish within a minute or so.
        4. The spare server is now the production server.

    The problem remaining is that the new production server is now out of date by however long it took to perform maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?
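
    On the index-rebuild piece specifically, it may not need a window at all. A sketch (the table name is a placeholder, and this assumes Enterprise Edition, which online index operations require):

        -- rebuild without taking long-term blocking locks on readers/writers;
        -- on SQL Server 2008 this fails for indexes containing LOB columns
        -- (that restriction was lifted in SQL Server 2012)
        ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (ONLINE = ON);

    That would reduce the swap-server choreography to the cases that genuinely require downtime, such as Windows Update and hardware work.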

  • FreeNAS pool configuration - RAID1 + other drives

    - by trnelson
    Simple questions, really. I found this answer with a similar setup, but I'm not sure it answers my question; if it does, I'm curious why, since the answer seems a bit unsure: ZFS Hard Drive Configuration in FreeNAS

    I'm building a server which will be used primarily for backup, plus some media streaming, possibly with Plex. I seem to understand most everything I need, but I'm still a bit confused about how pools work and how to configure them for my scenario.

    I will have 2x 2TB WD Red drives, which I plan on using in a mirrored setup (RAID1). This would be for backup, and I'd also like to do offsite backup to my CrashPlan account from this array. I also have a few other drives: 1.5TB, 320GB, 250GB. I'm not sure exactly what to do with them yet, but I'm looking for options. The FreeNAS OS will be running from a 16GB USB flash drive.

        1. Would it be wise to use the 1.5TB as a backup of the backup, essentially as a mirror or perhaps for snapshots of the 2TB RAID1? I'm still learning about snapshots.
        2. Should the 2TB mirrored drives be in their own pool? Should the other drives be set up in their own pools as well, or should they be JBOD in a single pool? They may or may not get much use, since the 2TB array is plenty for me.
        3. Does a dataset basically mimic the idea of a partition or a network share? In other words, would I map \\SERVER\Share to X: on my laptop?
        4. Let's say I wanted to use the 250GB drive as an encrypted drive to store all of my cat pictures. Would it have to be in its own pool?
        5. If I use jails apps, should they go on the backup RAID1 or somewhere else?

    Thank you!
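
    For orientation on the pool/dataset/snapshot vocabulary, the underlying ZFS commands look roughly like this (a sketch; device, pool, and dataset names are made up, and FreeNAS would normally drive all of this through its GUI):

        # a mirrored pool from the two 2TB Reds
        zpool create tank mirror ada0 ada1

        # a dataset behaves like a folder with its own properties,
        # and is what gets shared out (e.g. \\SERVER\backup -> X:)
        zfs create tank/backup

        # a snapshot is a point-in-time view, nearly free until data diverges
        zfs snapshot tank/backup@2014-01-01

    On the encryption question: as far as I understand, FreeNAS applies its (GELI-based) encryption per volume at creation time, so an encrypted 250GB disk would indeed end up as its own pool.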

  • Access server using IP on another interface

    - by Markos
    I am using Windows Server 2012 instead of a router for my home network. Currently I am using RRAS, and computers on the local network can access the Internet correctly. Here is a map of the current setup:

        [PC1] ---|
                 |---- (lan ip)[Server](wan ip) --> internet
        [PC2] ---|

    I have applications running on the server, such as IIS and others. All can be accessed from the internet using the WAN IP and from the LAN using the LAN IP. I have a domain, let's say my-domain.com, which resolves to my WAN IP.

    What I want is to enable my LAN computers to connect to services on my server using the very same address as internet users, e.g. http://my-domain.com/. However, this does not work for my LAN computers. What I understand is that I need to set up some kind of loopback route, so that packets arriving on the LAN interface get routed to the WAN interface, but I haven't found how to achieve this (in fact, I don't know what to search for). Feel free to ask for additional information and I will try to update the question.
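
    The usual terms to search for are "hairpin NAT" (also "NAT loopback") and "split DNS". The split-DNS route is often simpler on a Windows Server: an internal zone for my-domain.com that resolves to the LAN address, so local clients never leave the network at all. A sketch, assuming the DNS Server role is installed and the LAN clients use this server for DNS (names and the address are placeholders):

        # PowerShell on Server 2012
        Add-DnsServerPrimaryZone -Name "my-domain.com" -ZoneFile "my-domain.com.dns"
        Add-DnsServerResourceRecordA -ZoneName "my-domain.com" -Name "@" -IPv4Address 192.168.1.10

    With that in place, internal lookups of my-domain.com return the LAN IP while external users keep resolving the WAN IP, and no routing trickery is needed.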

  • Wireless card overheating?

    - by Sidney
    Ok, so I've had my laptop for several years (I wanna say 4, but possibly more); it's a Toshiba Satellite. I'm running Linux Mint 15 and am having a strange new issue: after several hours of running, my wireless stops working. It can SEE wireless networks but refuses to connect to any of them. (On a side note, connecting to a router with a cable at this point works fine.)

    The fact that it can SEE the networks makes me think the card is in good condition and the problem is software-related. The fact that it works for several hours before booting me makes me think perhaps the transmitter is getting too hot.

    I don't use my laptop in dusty environments, and I keep it on an elevated surface (alternatively, I actively try not to let it sit on soft surfaces where the vents get covered). I spray out the CPU fan with compressed air about once a year, so I really don't think the insides should be too dirty. Finally, unfortunately, sensors only gives me CPU temps, but they run about 40-50 degrees C, which from my understanding is perfectly normal for an i3.

    Does anyone have any suggestions on what I can do to determine the root cause of this?
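
    A few commands that could help separate a driver wedge from a heat problem (a sketch; the module name depends on the actual card, so check the lspci output first):

        # identify the card and which kernel driver it uses
        lspci -k | grep -iA3 network

        # watch for driver/firmware errors at the moment the connection dies
        tail -f /var/log/syslog | grep -iE 'wlan|firmware|deauth'

        # if a driver wedge is suspected, reload the module instead of rebooting
        sudo modprobe -r ath9k && sudo modprobe ath9k    # ath9k is a placeholder

    If reloading the module instantly restores the connection, that points at the driver rather than heat; a thermal fault would typically need cool-down time before recovering.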

  • Use subpath internal proxy for subdomains, but redirect external clients if they ask for that subpath?

    - by HostileFork
    I have a VirtualHost that I'd like to have several subdomains on. (For the sake of clarity, let's say my domain is example.com and I'm just trying to get started by making foo.example.com work, and build from there.)

    The simplest way I found for a subdomain to work non-invasively with the framework I have was to proxy to a sub-path via mod_rewrite. Thus paths would appear in the client's URL bar as http://foo.example.com/(whatever) while they'd actually be served as http://foo.example.com/foo/(whatever) under the hood. I've managed to do that inside my VirtualHost config file like this:

        ServerAlias *.example.com
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        # AND is implicit with the condition above
        RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC]
        RewriteRule ^/(.*)$ /foo/$1 [PT]

    (Note: it was surprisingly hard to find that particular working combination. Specifically, the [PT] seemed to be necessary on the RewriteRule. I could not get it to work with examples I saw elsewhere using [L] or just [P]; it would either show nothing or get into loops. Also, some browsers seemed to cache the response pages for the bad loops once they got one... a page reload after fixing it wouldn't show it was working! Feedback welcome, in any case, if this part can be done better.)

    Now I'd like to make what http://foo.example.com/foo/(whatever) provides depend on who asked. If the request came from outside, I'd like the client to be permanently redirected by Apache so they get the URL http://foo.example.com/(whatever) in their browser. If it came internally from the mod_rewrite, I want the request to be handled by the web framework... which is unaware of subdomains. Is something like that possible?
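
    One pattern that may fit the "who asked" distinction (a sketch, untested in this exact setup): %{THE_REQUEST} holds the client's original request line and is not changed by internal rewrites, so it can tell an externally-typed /foo/... URL apart from the internally rewritten one:

        # externally-requested /foo/... URLs: send the browser a permanent redirect
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /foo/
        RewriteRule ^/foo/(.*)$ http://foo.example.com/$1 [R=301,L]

        # everything else: internal pass-through to the framework, as before
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{REQUEST_URI} !^/foo/ [NC]
        RewriteRule ^/(.*)$ /foo/$1 [PT]

    The redirect rule must come first so that a client-supplied /foo/... is bounced before the pass-through rule can match the rewritten URI.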

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G router with DD-WRT on it and a couple of comparable Linksys-G PCI cards for connectivity, but decided to upgrade hoping it would help with my performance issues. The computers in my house are connected as follows:

        Comcast Business Class Commercial 25mbps/10mbps (verified)
        D-Link DGL-4500 Wireless-N router
        Windows 7 x64 - D-Link DWA-552 Wireless-N
        Windows 7 x64 - D-Link DWA-552 Wireless-N
        Mac Mini 10.6.2 - AirPort Extreme N
        PlayStation 3, hard-wired
        Xbox 360, hard-wired

    Essentially the problem is very specific. Web browsing and uploading/downloading files from the internet are fine, more than fine. But if I want to, say, stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PCs or the Mac, I get a consistent 500-900Kbps throughput at the high end. If I open my network browser or try to browse my homegroup, the response time is horrible. Both of my Windows computers show strong wireless signals with a connection speed of 300Mbps. I know I can never expect to achieve anything near those speeds, but 500Kbps? Here is what I have tried so far:

        Enabled N-only and N/G-only modes on the router
        WPA2 with AES encryption
        Disabled "Remote Differential Compression" in Windows 7
        Disabled TCP "Auto-Tuning"
        Used other software for file copies, such as "TeraCopy"

    I am at the end of my rope. Unfortunately I live in a 75-year-old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.
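
    One measurement that would separate raw Wi-Fi throughput from file-sharing overhead (a suggestion; iperf would need to be installed on both machines, and the address is a placeholder):

        # on one Windows 7 box
        iperf -s

        # on the other, run a 30-second test against the first
        iperf -c 192.168.0.101 -t 30

    If iperf reports tens of Mbit/s while SMB copies crawl at under 1 Mbit/s, the radio link is fine and the problem sits in the file-sharing layer; if iperf is equally slow, the wireless itself is the bottleneck.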

  • Difference between CurrentClockSpeed and MaxClockSpeed

    - by Ben
    Rationale for why this belongs on Server Fault rather than Stack Overflow: I already have my program which gets the value; I am asking about the value returned and what it means.

    I have an in-house program which audits our company PCs, and one of the things it checks is the speed of the processor. To do this, it queries the Win32_Processor WMI class and gets the value of CurrentClockSpeed. We were playing with the data today and found an anomaly: some of the speeds were being reported incorrectly (for example, CurrentClockSpeed said 1.0GHz, whereas the CPU name said Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz, and it is confirmed to in fact be 1.83GHz). I did a bit of digging on the internet and found this blog post which might explain what is going on.

    My initial thought was to change the program to get the value of MaxClockSpeed instead of CurrentClockSpeed, but Microsoft's documentation doesn't clearly define what this will return. What I mean is: will it return the actual maximum speed (say, if the CPU were overclocked), which it would not normally be running at, or will it return what I expect, which is the maximum speed under normal (not overclocked) conditions?
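
    For a quick manual comparison of the two values on any given machine, the standard wmic query is:

        wmic cpu get Name, CurrentClockSpeed, MaxClockSpeed

    For what it's worth, the pattern in the example is consistent with CurrentClockSpeed capturing a momentary SpeedStep-throttled frequency (a T5600 idling near 1.0GHz) while MaxClockSpeed reflects the firmware-reported rated speed, but treat that as an informed guess rather than documented behaviour.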

  • Apache Virtual Host with directory aliases

    - by brechtvhb
    I'm trying to set up a dynamic virtual host in Apache with a directory alias pointing to a different path for every domain. Here's what I'm trying to achieve. Say I have 2 domains:

        www.domain1.com
        www.domain2.com

    I want both to point to the same index.php file (C:/cms/index.php). Now the hard part... I want directories or certain file types to point to a different path for each domain. For example:

        www.domain1.com/layout    -> C:/store/www.domain1.com/layout
        www.domain2.com/layout    -> C:/store/www.domain2.com/layout
        www.domain1.com/image.png -> C:/store/www.domain1.com/image.png
        www.domain2.com/image.png -> C:/store/www.domain2.com/image.png

    However, the admin directory should point to the same path again for all sites:

        www.domain1.com/admin -> C:/cms/admin
        www.domain2.com/admin -> C:/cms/admin

    Is there a way to achieve this kind of behaviour in Apache 2.2 without having to create a VirtualHost entry for each new domain?
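
    A sketch of one way this might collapse into a single catch-all vhost (untested; it leans on mod_rewrite's mass-virtual-hosting trick of substituting %{HTTP_HOST} into a filesystem path, and on the fact that mod_rewrite maps URLs before mod_alias):

        RewriteEngine On

        # shared admin area for every domain
        Alias /admin "C:/cms/admin"

        # if the requested file exists in this domain's store, serve it from there
        RewriteCond %{REQUEST_URI} !^/admin
        RewriteCond "C:/store/%{HTTP_HOST}%{REQUEST_URI}" -f
        RewriteRule ^/(.*)$ "C:/store/%{HTTP_HOST}/$1" [L]

        # everything else falls through to the shared CMS entry point
        RewriteCond %{REQUEST_URI} !^/(admin|index\.php)
        RewriteRule ^ "C:/cms/index.php" [L]

    The file-existence condition is what makes the per-domain part dynamic: new domains only need a new folder under C:/store/, not a new vhost.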

  • Lion MacBook Pro will not load webpages with DNS just after wake

    - by NReilingh
    I'm working with a 2011 MacBook Pro running Lion (10.7.2) that, after waking from sleep (i.e. opening the lid), takes an inordinately long time (2-3 minutes or more) to get a usable internet connection.

    Upon waking, the wi-fi icon signals that it is negotiating a network connection and completes one a few seconds later. At this point, Network Diagnostics will not show any issues, and everything in Network preferences looks normal: I'm connected to the proper network, have the right IP address and gateway, and the DNS settings are correct. However, any site accessed by domain name (like http://www.google.com) in Safari will return the "You are not connected to the Internet." error. Accessing a site directly by IP, say Google's 74.125.226.212, is successful. Yet Network Diagnostics will insist that DNS is functioning properly.

    After a few minutes, the following lines are printed to the Console log, and regular behavior is restored:

        11/18/11 8:11:31.288 PM airportd: _doAutoJoin: Already associated to “Wireless”. Bailing on auto-join.
        11/18/11 8:11:32.000 PM kernel: en1: BSSID changed to 00:25:9c:63:91:bd

    This behavior occurs only when waking from sleep, not when turning wi-fi off and on. The problem also occurs when using a wired Ethernet connection. As per this thread, I have tried flushing the DNS cache and wiping the wireless network from memory (it's not a protected network). Neither has worked.
