Search Results

Search found 26004 results on 1041 pages for 'debian based'.


  • How to avoid en.voyages-sncf.com redirecting to uk.voyages-sncf.com?

    - by Mark Smith
    OK, so en.voyages-sncf.com is French Railways' English-language website with full functionality for train booking in France - it sells iDTGV, offers seating options, etc. uk.voyages-sncf.com is their UK subsidiary, with reduced functionality: no seat options, no iDTGV and so on. Previously, I was able to select 'Other countries (EUR)' at top right and go from the uk version to the en version, or just type in the direct URL 'en.voyages-sncf.com' and go there. Now they seem to have implemented an automatic redirect: whenever I enter 'en.voyages-sncf.com' on my UK-based PC, or try to select 'Other countries (EUR)', it bumps me to uk.voyages-sncf.com, which I don't want. I can't get onto en.voyages-sncf at all. So, short of a heavyweight solution like a non-UK proxy server or the Tor browser, is there any simple fix? Some way of telling my browser: go to en.voyages-sncf, go directly to en.voyages-sncf and no other site, do not pass go, do not collect £200, ignore all redirects and do what you're told by ME, not by those Machiavellian so-and-sos?
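
    Before reaching for a proxy, it may help to check how the redirect is actually implemented. This hedged sketch (Python standard library only, nothing site-specific assumed) fetches the front page without following redirects: an HTTP 3xx answer with a Location header means the server itself is geo-redirecting, and no browser-side setting will stop it, while a 200 response would point to a cookie- or JavaScript-based bump that clearing cookies or blocking a script might defeat.

        import http.client

        # Probe en.voyages-sncf.com without following redirects, to see whether
        # the bump to uk.voyages-sncf.com is a server-side 3xx (likely
        # GeoIP-driven) or something done client-side.
        conn = http.client.HTTPConnection("en.voyages-sncf.com", 80, timeout=10)
        conn.request("GET", "/", headers={"User-Agent": "redirect-probe"})
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        print("Location:", resp.getheader("Location"))
        print("Set-Cookie:", resp.getheader("Set-Cookie"))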


  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups on a snapshot/incremental basis, capturing only changed data. I was surprised to see a regular period of time where data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval: it is approximately every 26-32 hrs. The disks were performing read operations at ~5MB/s and write operations at ~4.5MB/s, for a period of 3-4 hrs. The total written data was ~55-60GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that the process only happens at most once every 24 hours. I was able to investigate while it was running, finding the following:

      - the process is store.exe
      - it is working on the mailbox database files
      - while running, it is generating .log files (in the mailbox database folder) consistent with database changes
      - the mailbox database is ~60GB in size, which fits with the total data changes on each iteration

    I have currently switched to a fixed maintenance window, as a test. It's not clear whether this is the cause; the symptoms fit, but are not conclusive. Does anyone have any suggestions for additional troubleshooting?
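
    As a sanity check, the numbers in the question line up well with a single full pass over the database at the observed read rate; a two-line calculation makes the match explicit (all figures taken from the question itself):

        # Would one full read of the mailbox database at the observed rate
        # fill the observed 3-4 h window?
        db_size_gb = 60        # mailbox database size from the question
        read_rate_mb_s = 5     # sustained read rate from the question

        hours = db_size_gb * 1024 / read_rate_mb_s / 3600
        print(f"{hours:.1f} h")   # ~3.4 h, inside the observed 3-4 h window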


  • Google Apps routing to different servers, depending on domain

    - by Philip
    We are investigating Google Apps for Education for our group of schools. Currently, each school uses its own Exchange (2003) server. Each school has its own domain, which I have added to Google Apps as additional domains. I would like to start transitioning certain staff and some new pupils over to Google Apps for testing. In this interim phase, I need mail to be routed through Google Apps and then, if no appropriate mailbox is found, routed on to the individual schools depending on the recipient. I do know that it is possible to route mail that does not have an appropriate Google Apps account to a single server, under "Settings / E-mail Settings / General Settings / Routing / E-mail routing". This works well for a single organisation where all the extra mail is destined for one place. I also know that it is possible to set up Routes, under "Settings / E-mail Settings / Hosts", and then use rules, found under "Settings / E-mail Settings / General Settings / Routing / Receiving Routing"; I can then filter based on e-mail domain and forward on to the necessary server. My problem with this, as I understand it, is that it ignores the users that have Google Apps accounts set up and sends all mail to the Exchange server. Are there any solutions for this predicament? Many thanks!


  • How to access a Subversion repository via svn:// and https://?

    - by Hikari
    I know these are noob questions, but I've never run my own Subversion server before and I'm kinda lost. I installed VisualSVN on Windows, but it doesn't support the svn:// protocol by default, only HTTP or HTTPS. It is working fine over HTTP: I'm able to manage it from its management tool, see its repositories and get their HTTP-based URLs, and from those I'm able to use Tortoise to check out and check in. I'm able to check out from a repository URL using Tortoise: http://Main:90/svn/HikariKrumo/ But I need the svn:// protocol for Redmine to access it. Redmine claims to support http://, but it reports this error message: "The entry or revision was not found in the repository." And I need HTTPS to access it from the Internet. If I can get Redmine to access it via svn://, I can just configure it to use HTTPS in place of HTTP, and I hope it all works. I like VisualSVN because of its management tool, but I can use another Subversion distribution if needed, as long as it supports svn:// and https://. This is driving me crazy, because it should be simple but I can't get it to work.


  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process". I'm slightly confused, since the logs show that at the time the system had lots of free memory (around 26GB in one case) and was not particularly stressed in any other way. A JVM crash with a similar error, plus the added query "Out of swap space?", made me dig a little deeper. It turns out that someone has configured our zone with a 2GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128GB of RAM as it needs. Our SAs are planning to cap this at 32GB when they get the chance. My current thinking is that whilst there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to swap out (i.e. it reserves the swap space at allocation time). Is this thinking right, or is there some other reason that I get memory allocation errors with this large amount of memory free and a seemingly undersized swap space?


  • Unexpected behaviour in a Lotus Notes programmable table

    - by Mark B
    I'm designing a workflow database in Lotus Notes 6.0.3 (soon upgrading to 8.5), and my OS is Windows XP. I have recently tried converting a tabbed table into a programmable one. This was so that I could control which tab was displayed to the user when it was opened, so that they were presented with the most appropriate one for that document's progress through the workflow. That part of it works! One of the tabs features a radio button that controls visibility of the next tab, and a pair of cascading dialogue boxes. One contains the static list "Person":"Team", and the other has a formula based on the first:

        view:=@If(PeerReview = "Team"; "GroupNames"; "GroupMembers");
        @Unique(@DbColumn(""; ""; view; 1))

    The dialogue boxes have the property "Refresh fields on keyword change" selected. The behaviour that I wasn't expecting is this: if the radio button is set to "Yes" and a value is selected in one of the dialogue boxes, the table opens the next tab. If the radio button is set to "No" and a value is selected in one of the dialogue boxes, the entire table is hidden. I can duplicate the latter by switching off the "Refresh fields on keyword change" property on the dialogue boxes and instead pressing F9 after selecting a value. I have no idea why the former occurs, though. The table is called "RFCInfo", and I have a field on the form called "$RFCInfo" which is editable, hidden from all users who aren't me, and initially set by a Postopen script, which I can post if necessary - it's essentially a Select Case statement that looks at a particular item value and returns the name of the table row relating to that value. Can anyone offer any pointers?


  • Is there such a thing as a file hosted container which deduplicates data held within?

    - by Mallow
    Background

    I have backups of a website which stores all of its data in a single file. This file is several gigs large, and I have many different backups of it. Most of the data within is the same, plus whatever was added or changed. I want to keep all the backups I've made through the years in case I find a horrible surprise of data corruption somewhere along the line. However, storing a 10-gig file every month gets expensive.

    Seeking a solution

    I've often thought about different ways of alleviating this problem. One thought that comes up often combines the idea of a deduplicating file system with something that doesn't require its own partitioned volume on a hard drive - something like what TrueCrypt does with what it calls "file-hosted containers", which the TrueCrypt program lets you mount and dismount as if they were regular hard drives.

    Question

    Is there a virtual hard drive mounter that uses a file-based container backed by a deduplicating file system? (This question is a little awkward to put into words; if you have a better idea of how to ask it, please feel free to help out.)


  • Windows 7: moved system partition, need to update boot partition

    - by Actorclavilis
    So, I have a decently standard Windows 7/Ubuntu dual-boot setup, and (since Ubuntu is my usual operating system) I found I needed to grow my Ubuntu partition and shrink my W7 partition. Originally, my system (500G) looked like this:

      - W7 boot partition (1.5G)
      - Ubuntu (around 240G)
      - W7 (same size as Ubuntu) (on an extended partition, all by itself)
      - swap (rest of disk, around 16G)

    Now I'm no stranger to partitioning and filesystem tools, especially GParted, which I used from a Linux boot disk. After my partition editing, the partitions are laid out the same, except the Ubuntu partition is now 407G and the W7 partition is smaller to compensate. I had supposed, based on http://www.gparted.org/faq.php, that I would be able to run the W7 install disk in recovery mode and have it deal with the rearrangement, then possibly reinstall GRUB or something. Well, now the W7 install disk doesn't even see my W7 installation. All my files are there and the NTFS is perfectly clean - no problems there - but the install disk won't notice it. (Of course, the GRUB entry works fine, but the W7 boot partition (which I didn't change) refuses to boot it.) So, basically, any ideas on how to fix this? I don't especially want to rerun the entire install procedure, because I'll have a bunch of programs to reinstall (never mind redoing GRUB), but I fear that might be the only option. Thanks.


  • Anyone have a script to delete a specific local Windows profile?

    - by Jordan Weinstein
    I'm looking for a PowerShell (preferred), .CMD, or .VBS script to delete a specific user profile on a workstation (WinXP) or terminal server (2000, '03 or '08). I know all about the delprof utility - that only allows you to delete based on a period of inactivity. I want a script to:

      - prompt the admin for a username
      - delete that username's profile, and
      - delete the entire profile - the registry hive too, not just the folder structure within Documents and Settings

    The same way it would if you went to My Computer -> Properties -> Advanced tab -> User Profiles -> Settings and deleted a profile from there. Any ideas? All I can think of is doing an AD lookup to get the SID of the user specified, then using that to delete the correct registry hive too... something simpler would be nice, though. Basically, my help desk used to be local administrators on our Citrix servers, and a common fix for various issues was for them to delete a user's profile on the Citrix server(s) and have the user log back in - voila, whatever issue they had was resolved. Going forward, in the new Citrix environment, they will no longer be local admins on those boxes, but they still need to be able to delete profiles (deleting the entire profile - folder and registry hive - is key). Thanks.
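
    A hedged sketch of the lookup-and-delete logic, written in Python rather than PowerShell (assuming Python is available on the box; the same walk translates directly to VBScript or PowerShell). It resolves the username locally via the ProfileList registry key instead of an AD lookup, then removes both the profile folder (which holds the NTUSER.DAT hive) and the ProfileList entry that points at it - the two things the GUI's delete button removes. Run it as an administrator.

        import os
        import shutil
        import winreg

        PROFILE_LIST = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"

        username = input("Username whose local profile should be deleted: ").strip().lower()

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PROFILE_LIST) as plist:
            for i in range(winreg.QueryInfoKey(plist)[0]):   # number of SID subkeys
                sid = winreg.EnumKey(plist, i)
                with winreg.OpenKey(plist, sid) as key:
                    raw_path, _ = winreg.QueryValueEx(key, "ProfileImagePath")
                profile_dir = os.path.expandvars(raw_path)   # e.g. %SystemDrive%\Documents and Settings\jdoe
                if profile_dir.lower().rstrip("\\").endswith("\\" + username):
                    print("Removing", profile_dir, "and ProfileList\\" + sid)
                    shutil.rmtree(profile_dir, ignore_errors=True)  # folder, incl. the NTUSER.DAT hive
                    winreg.DeleteKey(plist, sid)                    # the registry pointer to the profile
                    break
            else:
                print("No local profile found for", username)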


  • Why do my ping results (Windows) alternate between "timeout" and "network is not reachable"?

    - by Sopalajo de Arrierez
    My Windows is in Spanish, so I will have to paste console output in that language (I think that translating without knowing the exact terms used in the English version could give worse results than leaving it as it appears on screen). This is the issue: when pinging a non-existent IP from a WinXP-SP3 machine (a clean Windows install, just formatted), I sometimes get a "timeout" result and sometimes a "network is not reachable" message. This is the result of pinging 192.168.210.1, which does not exist on the network:

        ping 192.168.210.1

        Haciendo ping a 192.168.210.1 con 32 bytes de datos:
        Tiempo de espera agotado para esta solicitud.
        Respuesta desde 80.58.67.86: Red de destino inaccesible.
        Respuesta desde 80.58.67.86: Red de destino inaccesible.
        Tiempo de espera agotado para esta solicitud.

        Estadísticas de ping para 192.168.210.1:
            Paquetes: enviados = 4, recibidos = 2, perdidos = 2 (50% perdidos),
        Tiempos aproximados de ida y vuelta en milisegundos:
            Mínimo = 0ms, Máximo = 0ms, Media = 0ms

    The DHCP client is enabled, and the computer gets its network config from the router:

        My IP:   192.168.11.2
        Netmask: 255.255.255.0
        Gateway: 192.168.11.1
        DNS:     80.58.0.33 / 194.224.52.36

    This is the output from "route print":

        ===========================================================================
        Rutas activas:
        Destino de red    Máscara de red   Puerta de acceso  Interfaz      Métrica
        0.0.0.0           0.0.0.0          192.168.11.1      192.168.11.2  20
        127.0.0.0         255.0.0.0        127.0.0.1         127.0.0.1     1
        192.168.11.0      255.255.255.0    192.168.11.2      192.168.11.2  20
        192.168.11.2      255.255.255.255  127.0.0.1         127.0.0.1     20
        192.168.11.255    255.255.255.255  192.168.11.2      192.168.11.2  20
        224.0.0.0         240.0.0.0        192.168.11.2      192.168.11.2  20
        255.255.255.255   255.255.255.255  192.168.11.2      192.168.11.2  1
        255.255.255.255   255.255.255.255  192.168.11.2      3             1
        Puerta de enlace predeterminada: 192.168.11.1
        ===========================================================================
        Rutas persistentes:
        ninguno

    The output for 1.1.1.1, which also does not exist on the network:

        ping 1.1.1.1

        Haciendo ping a 1.1.1.1 con 32 bytes de datos:
        Tiempo de espera agotado para esta solicitud.
        Tiempo de espera agotado para esta solicitud.
        Tiempo de espera agotado para esta solicitud.
        Tiempo de espera agotado para esta solicitud.

        Estadísticas de ping para 1.1.1.1:
            Paquetes: enviados = 4, recibidos = 0, perdidos = 4

    And the output for 10.1.1.1, which does not exist either:

        ping 10.1.1.1

        Haciendo ping a 10.1.1.1 con 32 bytes de datos:
        Respuesta desde 80.58.67.86: Red de destino inaccesible.
        Tiempo de espera agotado para esta solicitud.
        Tiempo de espera agotado para esta solicitud.
        Respuesta desde 80.58.67.86: Red de destino inaccesible.

        Estadísticas de ping para 10.1.1.1:
            Paquetes: enviados = 4, recibidos = 2, perdidos = 2 (50% perdidos),

    I can provide an approximate translation of any of this if necessary. Other computers on the same network (WinXP-SP3 and Win7-SP1) have this problem too. The gateway (router) is a Buffalo WHR-HP-GN with the official Buffalo firmware, not DD-WRT. I also have a Linux (Debian/Kali) machine on the network, so I ran the same tests there:

        ping 192.168.210.1
        PING 192.168.210.1 (192.168.210.1) 56(84) bytes of data.
        From 80.58.67.86 icmp_seq=1 Packet filtered
        From 80.58.67.86 icmp_seq=2 Packet filtered
        From 80.58.67.86 icmp_seq=3 Packet filtered
        From 80.58.67.86 icmp_seq=4 Packet filtered

    To the non-existent 1.1.1.1 (no response after waiting a few minutes):

        ping 1.1.1.1
        PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
        ^C
        --- 1.1.1.1 ping statistics ---
        153 packets transmitted, 0 received, 100% packet loss, time 153215ms

    And to the non-existent 10.1.1.1:

        ping 10.1.1.1
        PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
        From 80.58.67.86 icmp_seq=20 Packet filtered
        From 80.58.67.86 icmp_seq=22 Packet filtered
        From 80.58.67.86 icmp_seq=23 Packet filtered
        From 80.58.67.86 icmp_seq=24 Packet filtered
        From 80.58.67.86 icmp_seq=25 Packet filtered

    What is going on here? I am asking mainly for learning purposes, but there is another reason: when all pings return "timeout", ping sets %ERRORLEVEL% to 1, but if any of the replies are of the "network is not reachable" type, %ERRORLEVEL% is 0 (no error). That can be inappropriate for a shell script: we cannot use ping to detect, for example, whether the network is down due to loss of contact with the gateway.
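
    On the %ERRORLEVEL% point: Windows ping exits 0 whenever any ICMP reply arrives, and the "Red de destino inaccesible" replies from 80.58.67.86 (evidently a router on the ISP side) count as replies. A common workaround in scripts is to test for a genuine echo reply instead of the exit code; a minimal sketch in Python (the "TTL=" marker also appears in Spanish-localised output, but check yours):

        import subprocess

        def host_replies(ip: str) -> bool:
            # ping exits 0 on *any* reply, including "destination net
            # unreachable" from an intermediate router, so look for a real
            # echo reply ("TTL=") in the output instead.
            result = subprocess.run(["ping", "-n", "2", ip],
                                    capture_output=True, text=True)
            return "TTL=" in result.stdout.upper()

        print(host_replies("192.168.11.1"))   # True only if the host really answered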


  • Performance-optimizing Oracle 10g on a server that is also a Tomcat JSP app server?

    - by PKHunter
    I have inherited a simple RedHat 5 64-bit platform with SCSI disks in RAID 1, 16GB of RAM and a dual-core CPU, running Oracle 10g Release 2. This might be a decent platform for running the DB alone, but the same server, in a very simple active-active ("A-A") cluster, also runs Tomcat with several Java servlets. Sadly there is no caching layer; we only use an external CDN for some HTML caching. I am personally more familiar with web environments on the LAMP platform (Apache, PHP, MySQL, PostgreSQL).

    Problem: because the server runs both Tomcat JSP/Java and Oracle 10g with no caching, the server sometimes goes down. Often, sadly.

    Question: what are my options for improving the performance of all these different apps?

      - Connection pooling? For example, in the PostgreSQL world we have PgBouncer, which really helps. Does Oracle have something similar? Or is there a well-known Java-based external pooler that people use in production environments? (I'm not familiar with Java.)
      - Any "SQL cache", as in the MySQL and PostgreSQL worlds?
      - Any other kind of application cache, like "APC" or "eAccelerator" in the PHP world? The "OSCache" stuff from the Java world (a JSP thing I found on Google: http://onjava.com/pub/a/onjava/2005/01/05/jspcache.html?page=2)?
      - What else?

    Sorry if this is a noob question. I have googled and googled, but the problem is that I don't know what to google for, other than the broad general concepts above. So if not full answers, I would even appreciate basic pointers, and I am happy to JFGI myself. Thanks!


  • Most secure way to access my home Linux server while I am on the road? Specialized solution wanted

    - by Ace Paus
    I think many people may be in my situation: I travel on business with a laptop, and I need secure access to files from the office (which in my case is my home).

    The short version of my question: how can I make SSH/SFTP really secure when only one person needs to connect to the server from one laptop? In this situation, what special steps would make it almost impossible for anyone else to get online access to the server?

    A lot more details: I use Ubuntu Linux on both my laptop (KDE) and my home/office server. Connectivity is not a problem; I can tether to my phone's connection if needed. I need access to a large number of files (around 300 GB). I don't need all of them at once, but I don't know in advance which files I might need. These files contain confidential client information and personal information such as credit card numbers, so they must be secure. Given this, I don't want to store all these files on Dropbox or Amazon AWS or similar, and I couldn't justify the cost anyway (Dropbox doesn't even publish prices for plans above 100 GB, and security is a concern). However, I am willing to spend some money on a proper solution. A VPN service, for example, might be part of the solution? Or other commercial services? I've heard about PogoPlug, but I don't know if there is a similar service that might address my security concerns.

    I could copy all my files to my laptop, because it has the space, but then I have to sync between my home computer and my laptop, and I've found in the past that I'm not very good about doing this. And if my laptop were lost or stolen, my data would be on it. The laptop drive is an SSD, and encryption solutions for SSD drives are not good. It therefore seems best to keep all my data on my Linux file server (which is safe at home). Is that a reasonable conclusion, or is anything connected to the Internet such a risk that I should just copy the data to the laptop (and maybe replace the SSD with an HDD, which reduces battery life and performance)? I view the risk of losing the laptop as higher; I am not an obvious hacking target online. My home broadband is cable Internet, and it seems very reliable.

    So I want to know the best (reasonable) way to securely access my data from my laptop while on the road. I only need to access it from this one computer, although I may connect from my phone's 3G/4G, WiFi, a client's broadband, etc., so I won't know in advance which IP address I'll have. I am leaning toward a solution based on SSH and SFTP (or similar). SSH/SFTP would provide about all the functionality I anticipate needing: I would use SFTP and Dolphin to browse and download files, and SSH and the terminal for anything else. My Linux file server is set up with OpenSSH, and I think I have SSH relatively secured - I'm using DenyHosts too - but I want to go several steps further. I want to get the chance that anyone else can get into my server as close to zero as possible while still allowing me access from the road. I'm not a sysadmin or programmer or real "superuser"; I have to spend most of my time doing other things. I've heard about "port knocking", but I have never used it and I don't know how to implement it (although I'm willing to learn). I have already read a number of articles with titles such as:

      - Top 20 OpenSSH Server Best Security Practices
      - 20 Linux Server Hardening Security Tips
      - Debian Linux: Stop SSH User Hacking / Cracking Attacks with DenyHosts Software
      - more...

    I have not implemented every single thing I've read about, and I probably can't. But maybe there is something even better I can do in my situation, because I only need access from a single laptop. I'm just one user, and my server does not need to be accessible to the general public. Given all these facts, I'm hoping I can get some suggestions here that are within my capability to implement and that leverage these facts to create much better security than the general-purpose suggestions in the articles above.
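
    Since port knocking comes up in the question, here is a minimal client-side sketch of the idea. The server side is usually handled by a daemon such as knockd, which watches for an agreed-upon secret sequence of hits on closed ports and then briefly opens port 22 to that IP; the hostname and ports below are hypothetical placeholders, not a recommendation of a specific sequence.

        import socket
        import subprocess

        HOST = "home.example.org"          # placeholder for the home server
        KNOCK_PORTS = (7000, 8000, 9000)   # hypothetical sequence; must match the server's knockd.conf

        for port in KNOCK_PORTS:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                s.connect((HOST, port))    # the SYN packet is the "knock"; the connect itself fails
            except OSError:
                pass
            finally:
                s.close()

        # The firewall should now accept SSH from this IP for a short window.
        subprocess.run(["ssh", HOST])

    Key-only authentication (PasswordAuthentication no in sshd_config) plus something like this covers the single-user, single-laptop case well, since the knock sequence and the private key both live only on the one machine.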


  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi, I was wondering what in your opinion is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total:

      - three of those computers are Macs
      - the rest are Windows (1 Vista, 4 Win7 and 2 XP)

    I'm very open to what the method should be, but you should also consider the following:

      - very limited resources
      - quite "small" bandwidth (4 MBs download and 0.4 MBs upload for everyone - yep, that's it - though this might get a little bit better)
      - one of the main things to back up would be the mail: all Windows computers use Outlook, mainly 2003, and one Mac uses Outlook too (for Mac, of course - not 2011 yet)
      - we also have to back up the files: not a huge amount, very few very big files, very organised (by machine)

    What I would like is to hear your opinions as to which would be the best method (or combination of methods - preferably one, of course). We are not sure what we need and I'm open to suggestions, though an online (cloud-based) application would be great - but remember, the bandwidth is unbearable. Last thing to consider: we would like to do weekly backups (unless the method is very easy, of course). Thanks in advance!! I tried to be as specific as possible, but if anything is needed I'll gladly update - please ask for any clarification needed! Please avoid answers like "upgrade all to Windows 7 and throw away your Macs" :) ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it, for a lot of reasons.


  • Identifying test machines in analytics logs

    - by RTigger
    We're just beginning to add analytics to our SaaS application, to begin (among other things) billing clients based on usage. The problem we're running into is that there are a few circumstances where our support team will simulate a login to production to try to reproduce reported issues with a client's configuration. When they log in, an entry is made in our analytics logs that that specific account has logged in, which we use to calculate billing. A few ideas we had to solve this:

      1) We log IP addresses as well as machine keys for each PC that logs in - we could filter out known IP addresses and/or machine keys belonging to support. The drawback is that we have to maintain the list of keys/addresses manually.
      2) If support (or anyone else internal) runs our application in debug mode (as opposed to release), it will not report analytics. This is fine, as long as support (or anyone else) remembers to switch to debug mode.
      3) Include some sort of registry key or similar setting that must be set when configuring a production system in order to send analytics. Again, fine, as long as our infrastructure team remembers to set the key or setting.

    All of these approaches require some sort of human involvement, which we all know can be iffy at best. Has anyone run into a similar situation? Is there an automated approach to this problem? (PS: Of course we shouldn't be testing in production, but there are a few one-off instances with customer setup that we can't reproduce without logging in as them in production. This is the only time we do so, and that is the case I'm talking about in this question.)
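
    For what it's worth, idea (1) is usually the easiest to automate end-to-end: keep the support identifiers in one authoritative list (config file, database table, or directory-group lookup) and apply the filter in the billing pipeline rather than at logging time, so the raw logs stay complete for debugging. A minimal sketch - the field names and values are hypothetical:

        # Filter support-team activity out of billable analytics events.
        SUPPORT_MACHINE_KEYS = {"MK-SUPPORT-01", "MK-SUPPORT-02"}   # maintained in one place
        SUPPORT_IPS = {"203.0.113.10", "203.0.113.11"}              # office/VPN egress addresses

        def billable(event: dict) -> bool:
            return (event.get("machine_key") not in SUPPORT_MACHINE_KEYS
                    and event.get("source_ip") not in SUPPORT_IPS)

        events = [
            {"account": "client-a", "machine_key": "MK-SUPPORT-01", "source_ip": "203.0.113.10"},
            {"account": "client-a", "machine_key": "MK-CLIENT-77", "source_ip": "198.51.100.5"},
        ]
        print([e["account"] for e in events if billable(e)])   # only the real client login survives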


  • SSO to multiple websites from a SharePoint website

    - by Aico
    We have an intranet based on SharePoint 2010. In this intranet we have several links to other web servers within the same Active Directory, for example a link to our Outlook Web Access site on our Exchange 2010 environment. Three different setups visit this SharePoint environment and the other web servers:

      1. Windows 7 clients that are members of the Active Directory
      2. Home PCs that connect through an SSL VPN appliance
      3. Standalone thin clients (Windows 7 Embedded) within the corporate network

    The goal is to let people sign in only once. For the first group this isn't a problem, because AD-integrated authentication works fine and the Windows logon is passed on to SharePoint and the other web servers. The second group is also working fine because of the LDAP integration that the SSL VPN appliance uses. The third group, however, is experiencing issues: they need to enter their credentials every time they click a link to another web server. They first need to enter credentials for accessing the SharePoint environment; when clicking the link for their webmail they have to re-enter their credentials, and so on. Can someone tell me what the best solution would be to get SSO working for the third group as well? Some extra information: we also have a Forefront TMG server in our environment. I read somewhere that Forefront might be part of a solution for this problem, but I'm not sure how. Maybe someone here can help me? I look forward to some help. Best regards, Aico


  • VirtualBox bridged network not working as expected

    - by iby chenko
    I am having a hard time getting bridged networking to work with VirtualBox. The idea is to have the host as well as one or more guests on the same LAN. Using NAT (the default), I do get access to the Internet and to any node on the LAN when working from one of the VM guests. However, no LAN node, including the host, can access (or ping) a guest in a VM. I need to be able to use any guest as if it were a physical computer on the network (accessible by any machine on the LAN). According to my understanding of the VirtualBox documentation, this should be bridged mode. I think I set it up correctly - well, actually there is not much to it:

      1. select Bridged mode in the VM network setup
      2. select the physical NIC of the host to connect the bridge to
      3. start the VM

    When I do this, each VM does get a new IP address that corresponds to the LAN settings:

        192.168.1.100
        192.168.1.102
        192.168.1.103
        etc.

    where the host is 192.168.1.80 / 255.255.255.0 (IP addresses above 100 are served by DHCP). This seems to be correct based on what I know about Ethernet. From a VM I can ping other nodes like 192.168.1.50 and so on, and I still have Internet access. So far so good... But I STILL cannot ping any of the other (running, of course) VMs - not from other VMs, not from the host, and not from other nodes on the LAN. Aside from the fact that the IP addresses handed to the guests are now local, this still acts the same as NAT. What is going on? What am I missing? Regards, I


  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself/my family) for some 10 years. On my home network there is a nameserver (Fedora) running BIND 9 with two "views". One view is the "outside" view, and it provides name resolution (to the Internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution (to the internal RFC1918 addresses of these servers) as well as for all the inside hosts, network equipment, etc. I connect to my home network from outside (such as from work) with an OpenVPN client.

    What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC1918 "inside" responses) without fully changing my resolver to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The issue is that I'm then no longer able to resolve "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers via IP addresses directly, but this is inconvenient and causes issues with Apache name-based hosting (among other things). In the end, the effect I'm trying to achieve is as follows:

      - When I connect to the VPN, I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, but any other requests continue to go to the nameserver I was using before (the one I received on my company LAN via DHCP).
      - When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests are going there now).

    I'm looking for suggestions as to how this can be accomplished.
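
    The stock resolv.conf format cannot split queries by domain, but a small local forwarder can: dnsmasq, for example, supports a per-zone upstream with a line such as server=/myhomedomain.com/<home-ns-ip>, and an OpenVPN up/down script can add and remove that line. To confirm the home BIND actually answers "inside view" names across the tunnel before wiring any of that up, a hedged sketch using the third-party dnspython package (the IP and hostname are placeholders):

        import dns.resolver   # third-party: pip install dnspython (2.x API)

        resolver = dns.resolver.Resolver(configure=False)   # don't read /etc/resolv.conf
        resolver.nameservers = ["192.168.10.53"]            # hypothetical home BIND, reached via the VPN

        answer = resolver.resolve("server.myhomedomain.com", "A")
        print(answer[0])   # should print the RFC1918 "inside view" address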


  • Notepad++ incorrect syntax highlighting?

    - by user360919
    So I want to build an XHTML 1.0 Strict based website, and using Notepad++ for syntax highlighting seemed like a good idea. But when I add the XML declaration (as stated in the spec, proper XHTML pages should use an XML declaration and be served as application/xhtml+xml), I can't get the entire document highlighted properly. Here is the code I used for a basic page:

        <?xml version="1.0" encoding="UTF-8" standalone="no" ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-us" lang="en-us">
        <head>
            <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" />
            <title>Page</title>
            <script type="application/javascript">
                alert("A perfectly valid xHTML page...");
            </script>
            <style type="text/css">
                #test { text-align: center; }
            </style>
        </head>
        <body>
            <h1 id="test">TEST</h1>
        </body>
        </html>

    Paste this into Notepad++ and you'll see that it won't highlight the code between <script type="application/javascript"> and </script> (it renders its background white) if the language is set to XML. If I set the language to HTML, the script gets correctly highlighted but the XML declaration is not. What to do? How can I make a hybrid language - a combination of XML and HTML?


  • 3-4 old computers = general purpose cluster?

    - by TheLQ
    I have three or four old computers lying around right now: a P2 at 800 MHz(?), an Intel Mobile at 1.6 GHz, an AMD Athlon XP 2000+ at 1.66 GHz, and (I might not use this one) a P4 at 2.7 GHz, all with 512 MB of RAM, and I am considering clustering them together for fun/knowledge. They would be running an undecided version of Linux, preferably Ubuntu-based. The issue is what I want to use it for: general computing and occasional video encoding. By general computing I mean day-to-day tasks. However, I'm not sure whether every program started by a single X session is going to run on the same machine, defeating the purpose of such a system. Will programs be split up, or run on one machine? Second, assuming this is running 100baseT Ethernet (not sure if the PCI slot itself could handle gigabit), would the speed of having a program run over the network be an issue? It seems that the constant fetching of various things from RAM would be quite slow. And before you say "buy another computer!", that's not the point of this question. I'm asking whether it would be usable, not necessarily practical. And yes I know, this is going to be extremely power-consuming.


  • Is there any way to get Spotlight or the media browser in OS X (Snow Leopard) to index and recognize metadata from iPhoto/Aperture?

    - by jaydles
    It seems silly to go to all the trouble of assigning "Faces" data to thousands of photos but not make it possible to use that data to locate them outside of that application. I know that the metadata is stored in the "library" database for Aperture/iPhoto, rather than in the actual files (which is too bad). And I can even see why that might create challenges for Spotlight, since Spotlight is presumably a file-index system, not a media organizer - but surely the media browser used across the other OS X apps is intended to use it? The media browser's whole purpose seems to be to let you easily locate and reference the items you organize in one of the iLife apps (iPhoto or Aperture, in this case) from the others (say, iMovie or Mail). It's particularly vexing since the Photos app on the iPhone sorts by Faces by default. Additionally, the Mac-based media browser does have access to smart albums and folders, so you could establish a workaround by creating a smart album for each face, place, or tag and accessing them that way, but it seems like there must be an easier way. Am I missing something?


  • Sizing Switches for Storage and Production

    - by Untalented
    A couple of questions:

      1. Should you always completely separate the storage network switches from production switches, or are VLANs fine for segmenting this traffic? Is there a golden rule here?
      2. How do you properly size a switch for your environment based on the specifications the manufacturer provides (throughput, forwarding throughput, stacking throughput, max MAC addresses)? If you have two switch options and one has a maximum of 8,000 MAC addresses vs. another with 16,000 - what does this really mean to me? How do I make sure one vs. the other is sized properly for my needs? (See the sketch after this list.)
      3. Besides VLAN and jumbo-frame support, are there any other "must haves" for a virtual environment's production or storage networks?

    There is a wealth of knowledge on sizing SANs and such, but this seems equally important, and it's quite challenging to find as much information. Just to add some tidbits of information about the environment: this setup is for the data centers, which support two different locations with about 100 users in total between the two. The storage traffic will be iSCSI, and there will be 3 ESXi hosts and one SAN housing about 2.7TB of data. Since there is currently no storage network in place (no SAN), I'm having a hard time with #2 - really determining what backplane throughput and switch specifications will be sufficient.
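
    On question 2, the usual back-of-envelope test is to check the datasheet numbers against worst-case line rate for your port count, as in this sketch (a hypothetical 24-port gigabit switch):

        # Line-rate figures a non-blocking switch must meet or beat.
        ports = 24
        port_gbps = 1

        # Backplane: every port sending *and* receiving at full speed.
        backplane_gbps = ports * port_gbps * 2        # 48 Gbps

        # Forwarding rate: a 64-byte frame plus preamble and inter-frame gap
        # occupies 672 bits on the wire, so 1 Gbps is ~1.488 Mpps worst case.
        forwarding_mpps = ports * port_gbps * 1.488   # ~35.7 Mpps

        print(backplane_gbps, "Gbps backplane,", round(forwarding_mpps, 1), "Mpps forwarding")

    If the advertised switching capacity and forwarding rate meet or exceed those figures, the switch is non-blocking for that port count. The MAC table (8,000 vs. 16,000) only caps how many distinct hardware addresses the switch can learn before it starts flooding frames; with roughly 100 users plus a few dozen VMs, either figure is far more than enough.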


  • Multiple Screen - Keyboard Sharing

    - by nhbdesign
    I run a small architectural firm with several drafters employed. I'm currently setting up a new office space, and one of the things at the top of my list is figuring out a way to keep tabs on my drafters' work and to collaborate in real time. Here's the challenge: they sit in a separate large cubicle room and I'm at the other end of the hallway. The way it works now, every time they have a question about how to proceed on a certain design, they come all the way to my office, I open their file (read-only), give some ideas, save as a new file, they go back and copy-paste... in short, nonsense. What I've been thinking of is a hardwired solution that would give me an extra monitor on my desk which is wired (through a KVM or something) to each of my employees' workstations as a secondary display, so that I can watch what they do live and interact with them just as if I had an extra keyboard and monitor at their desks - except I don't want a separate monitor on my desk for each employee. I'd want their screens tiled on a single large display, all live, and whenever they ask me (or I just decide...) to step in, I just click on any tile and hurray, I'm in, editing and saving in real time on their workstation. I'd also like to reserve the option to use that monitor as just an extra screen for my own workstation when I want to. Is something like that possible in 2013? P.S. I know of TeamViewer and similar Internet/software-based tools, but I'm specifically looking for something solid, hardwired, and maintenance-free, and also something that would not notify my employees every time I view their screens (I'm not a tough boss though...).


  • Removed Old Domain Trust. Now Progress (9.1D) can't open DB File

    - by RLH
    My company has an old server, running Progress 9.1D on a Windows 2000 VM, which was used by our company ERP system (Vantage 6 by Epicor). Vantage was our primary system for a very long time. About 2 years ago we migrated to a larger, corporate system and cancelled our service contract with Epicor. Yesterday, we removed an AD trust between the corporate domain and the old AD domain we used in the days of Vantage. After restarting the virtual server, I have been able to start the "ProService for 9.1D" Windows service; however, I cannot get Vantage to start back up. When I run the application, I get the error below:

        ** Could not connect to server for database [progress db file], errno 0. (1432)

    How can I fix this? FYI, I haven't had to work with Progress in years, and even then I wouldn't have considered myself a "novice" - I'm even less knowledgeable than that title would suggest. Vantage had a lot of internal tools, and I recall that Epicor support managed to prevent .pf scripts from being executed. If there was a Progress-specific patch that needed to be applied, you had to do it within the Vantage software, or they had to remote into the machine to do it. I may not be able to run a .pf script, but I do know that I can log into the console-based server application. (Yes, I can't even recall what that utility was called. It is sad.) It's been a long time, and I never had to dig into Progress that much. Please help, and feel free to ask questions. If you need more info, I'll update this post.


  • Determine which user initiated call in Asterisk

    - by adaptive
    I had the following code in my extensions.conf file:

        [local]
        exten => _NXXNXXXXXX,1,Set(CALLERID(name)=${OUTGOING_NAME})
        exten => _NXXNXXXXXX,n,Set(CALLERID(num)=${OUTGOING_NUMBER})

    Now I want to change this code to set the caller ID name and number based on the user/extension that is making the call. In fact I have four (4) users/extensions in my sip.conf, and only one of them (the one I use for business) is supposed to send a different caller ID/number. Everything is in the same context (for simplicity), since all lines need to be able to pick up an incoming call. The only difference is that when line1 makes a call, it has to send a different caller ID/number and use a different provider. This is what I have so far:

        [local]
        exten => _NXXNXXXXXX,1,Set(line=${SIP_HEADER(From)})
        exten => _NXXNXXXXXX,n,Verbose(line variable is <${line}>)
        exten => _NXXNXXXXXX,n,Set(CALLERID(name)=${IF($[ ${line} = line1 ]?${COMPANY_NAME}:${FAMILY_NAME})})
        exten => _NXXNXXXXXX,n,Set(CALLERID(num)=${IF($[ ${line} = line1 ]?${COMPANY_NUMBER}:${FAMILY_NUMBER})})
        exten => _NXXNXXXXXX,n,Dial(${IF($[ ${line} = line1]?SIP/${EXTEN}@${COMPANY_PROVIDER}:SIP/${EXTEN}@${FAMILY_PROVIDER})})

    I really don't know if this is correct, and I'm afraid to commit these changes to my extensions.conf before validating. Any help will be greatly appreciated.
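
    Two things are worth flagging in that dialplan, followed by a hedged sketch of a fix (the variable names are kept from the question; verify against your Asterisk version before committing). First, ${SIP_HEADER(From)} returns the entire From header (display name, URI, and tag), so it will never compare equal to a bare peer name like line1. Second, string comparisons inside $[ ] should be quoted, or values containing spaces will break the expression parser. With chan_sip, ${CHANNEL(peername)} yields the sip.conf peer name directly:

        [local]
        exten => _NXXNXXXXXX,1,Set(line=${CHANNEL(peername)})   ; e.g. "line1" straight from sip.conf
        exten => _NXXNXXXXXX,n,Verbose(line variable is <${line}>)
        exten => _NXXNXXXXXX,n,Set(CALLERID(name)=${IF($["${line}" = "line1"]?${COMPANY_NAME}:${FAMILY_NAME})})
        exten => _NXXNXXXXXX,n,Set(CALLERID(num)=${IF($["${line}" = "line1"]?${COMPANY_NUMBER}:${FAMILY_NUMBER})})
        exten => _NXXNXXXXXX,n,Dial(SIP/${EXTEN}@${IF($["${line}" = "line1"]?${COMPANY_PROVIDER}:${FAMILY_PROVIDER})})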


  • MSSQL 2008 login failed for windows authentication

    - by Force Flow
    I'm running Microsoft SQL Server 2008 on a Windows 2008 server. The server authentication is set to "SQL Server and Windows Authentication mode". I have created an Active Directory security group, "xyz app users". I have added a normal user (without any Active Directory admin privileges) and a user with domain admin privileges to the "xyz app users" group. I have added the group to the MSSQL Management Console as a login; this group is a member of the public server role and is mapped to two databases. On a workstation, when the normal user is logged in, I can configure a DSN ODBC connection and successfully create the DSN and test the SQL connection. However, when I'm logged in as the user with domain admin privileges and attempt to configure the DSN ODBC connection, I can't get past the login ID configuration screen. If I select "Windows authentication" and click "next", I get an error:

        Connection failed:
        SQLState: '28000'
        SQL Server Error: 18456
        [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'mydomain\myuser'

    In the server's application event log, this error appears:

        Login failed for user 'mydomain\myuser'. Reason: Token-based server access
        validation failed with an infrastructure error. Check for previous errors.
        [CLIENT: 172.x.x.x]

    And in MSSQL's event logs:

        Error: 18456, Severity: 14, State: 11

    The solutions I've seen so far do not seem to fit this situation (some are only applicable when BUILTIN\Administrators is being used locally on the server, which is not the case here).

