Search Results

Search found 25123 results on 1005 pages for 'domain model'.


  • Can you see something wrong in my working .htaccess?

    - by AlexV
    OK, after much searching and trial and error I've managed to create an .htaccess that does what I want (see explanations and questions after the code block):

        <IfModule mod_rewrite.c>
        RewriteEngine On
        #1 If the requested file is not url-mapper.php (to avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!url-mapper\.php)$
        #2 If the requested URI does not end with an extension OR if the URI ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #3 If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/seo-urls\/(excluded1|excluded2)(/.*)?$
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA]
        </IfModule>

    This is what the .htaccess should do: #1 checks that the requested file is not url-mapper.php (to avoid infinite rewrite loops); this file will always be at the root of the domain. #2 the .htaccess must only catch URLs that don't end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch) as well as URLs ending with .php* files (.php, .php4, .php5, .php123...). #3 some directories (and their children) can be excluded from the .htaccess (in this case /seo-urls/excluded1 and /seo-urls/excluded2). Finally, the .htaccess feeds the mapper a hidden GET parameter named uri containing the requested URI. Even though I've tested it and everything works, I want to know if what I'm doing is correct (and if it's the "best" way to do it). I've learned a lot with this "project", but I still consider myself a beginner at .htaccess and regular expressions, so I want to triple-check it here before putting it in production...
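
    For comparison, a very common way to avoid the self-rewrite loop is to skip anything that maps to a real file or directory rather than naming the mapper explicitly. A minimal sketch, assuming the same mapper path as above (note the behaviour differs slightly: it also leaves every other existing file alone, including .php files):

        <IfModule mod_rewrite.c>
        RewriteEngine On
        # Skip requests that resolve to a real file or directory,
        # so the rule can never rewrite the mapper to itself
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA]
        </IfModule>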

    Read the article

  • Exchange 2013 really slow outside of localhost

    - by ItsJustJP
    We've got a 12-core Xeon server with 24 GB of RAM running Server 2012. We recently migrated from Exchange 2010 (which was on another server) to Exchange 2013, which resides on our new 12-core server. Accessing OWA on the Exchange server itself is fine; it's very quick and responsive. However, access it from any other computer connected to the domain via a 1 Gbps connection and it'll take 10-15 seconds to load. Also running slow are the public calendars that people in my place need to access, again taking 10-15 seconds to open, which can sometimes cause Outlook to stop responding. Further to that, we have phones that connect via the internet (of course) to the Exchange server so people can get work emails when they are out of the office. Guess what: this is also running slow. I've searched for many solutions and have tried changing Outlook authentication methods, but there is no change in speed. The old Exchange 2010 server no longer exists, but there was no problem before the migration. Has anyone got any suggestions? Thanks :) I must also mention that the Server 2012 machine that Exchange 2013 is installed on is also the DC. Update: It would appear that any connection via HTTPS is slow. It took more than 15 minutes for an Outlook client to download 50 MB of emails (Outlook Anywhere).

    Read the article

  • nginx configuration file explained

    - by Chris Muench
    I have a few questions about this configuration file "default" in /etc/nginx/sites-enabled. It is shown below.

        server {
            root /usr/share/nginx/www;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                proxy_pass http://127.0.0.1:8080;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location /images {
                root /usr/share;
                autoindex off;
            }
        }

    There is no "listen" directive, so how does it know to default to 80? The server_name is localhost, so how does another domain work? Why is the location directive embedded in the server directive? Does that mean these locations ONLY apply to this server? None of my configs have listen 80 default_server; how does nginx then pick which configuration to use?
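
    For illustration, here is a sketch of how the selection works with two explicit server blocks (hypothetical names, not from the config above). Without a listen directive nginx listens on *:80 by default; it then matches the request's Host header against each block's server_name, falling back to the default_server (or, absent that, the first block defined for that port):

        server {
            listen 80 default_server;     # catch-all when no server_name matches
            server_name _;
            root /usr/share/nginx/www;
        }

        server {
            listen 80;
            server_name example.com;      # name-based virtual host
            root /var/www/example;
        }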

    Read the article

  • Working with an external button box

    - by Scott
    I tried this question on Stack Overflow, but I was pointed here, so here goes: For a new project of mine, I am looking for a way to (for example) open a pop-up window on my laptop by pressing a button on an external device (to be built by myself, or at least bought) connected over USB. Basically I'm looking at something like an Arduino or Raspberry Pi (IF I am looking in the right direction) with buttons on it, so that as soon as I hit a button on the external box, a command activates on my laptop and, for example, opens a pop-up window in which I can input text. Does anyone know: 1) if it is possible to do this at all, and 2) what equipment is needed for the external box and what programming is needed? I prefer .NET, but maybe it can only be done with software from the external box. If anyone can point me in the right direction, like a make/model for the external box or websites, I would be very happy. I have knowledge of Visual Studio/.NET, but I am willing to learn other languages if .NET is not an option for this project. Thanks in advance, Scott. PS: If anyone knows of some better tags, or at least knows what I mean and needs me to edit the question, please do tell me... I am new on Stack Overflow/Superuser.
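
    One common approach (an assumption on the editor's part, not something from the question): an Arduino Leonardo or Micro can enumerate as a USB keyboard, so a physical button can send a hotkey that a .NET application on the laptop registers as a global hotkey and reacts to. A minimal Arduino sketch:

        #include <Keyboard.h>

        const int buttonPin = 2;             // button wired between pin 2 and GND

        void setup() {
          pinMode(buttonPin, INPUT_PULLUP);  // internal pull-up, LOW = pressed
          Keyboard.begin();
        }

        void loop() {
          if (digitalRead(buttonPin) == LOW) {
            Keyboard.press(KEY_LEFT_CTRL);   // send Ctrl+Alt+P; the laptop-side
            Keyboard.press(KEY_LEFT_ALT);    // app listens for this hotkey and
            Keyboard.press('p');             // opens its pop-up window
            Keyboard.releaseAll();
            delay(300);                      // crude debounce
          }
        }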

    Read the article

  • Join multiple consecutive SQLite database dump files into 1 common database? Purpose: Search through ENTIRE Chrome Browsing History

    - by porg
    Google Chrome's default browsing-history search only lets you access the records of the most recent 100 days. Nevertheless, in your application data, Chrome keeps your entire browsing history in SQLite database files, with the file naming scheme "History Index YYYY-MM". I am looking for a way to search through my entire browsing history, with sophisticated filters (limit search terms to certain fields such as URL, domain, title, body text; wildcard or regex terms; date ranges), in either of two ways. First, some ready-made software: eHistory came close, as it can limit terms to fields, but it lacks wildcards/regexes and has the same limited time horizon as the default search. Beyond that, I could not find any suitable Chrome extension or standalone (Mac) app. Second, a command line to join multiple SQLite database files into one database, which I can then query (with the full syntax power), in the spirit of the pseudo-code below. Preferred this way:

        sqlite --targetDatabase ChromeHistoryAll --importFiles /path/to/ChromeAppData/History\ Index* --importOnlyYetUnknownFiles

    Or, if my desired feature --importOnlyYetUnknownFiles is not possible (the feature could also be called "avoid duplicate imports by checking UIDs"), then by explicitly importing only those files which I know have not yet been imported into the ChromeHistoryAll database:

        cd ChromeAppData
        sqlite --databaseTarget ChromeHistoryAll --importFiles YetNotImported1 YetNotImported2 YetNotImported3

    All my queries I would then perform in the database "ChromeHistoryAll". P.S.: An additional question of general interest: Is there a way to perform a database query in a temporary database created on the fly from multiple files? Like:

        sqlite --query="SQL query" --targetDatabase DbAll --DBtemporaryInRAM --importFiles db1 db2 db3

    This is surely not applicable to my Chrome question, as these History Index files have a combined size of 500 MB, so such a query would perform badly. But it could come in handy in other situations.
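
    For reference, the real sqlite3 shell has none of those pseudo-flags, but ATTACH DATABASE expresses the same merge idea. A sketch, assuming a history table named urls (Chrome's actual "History Index" schema may well differ):

        sqlite3 ChromeHistoryAll.db
        sqlite> ATTACH DATABASE 'History Index 2012-01' AS src;
        sqlite> CREATE TABLE IF NOT EXISTS urls AS SELECT * FROM src.urls WHERE 0;
        sqlite> INSERT OR IGNORE INTO urls SELECT * FROM src.urls;
        sqlite> DETACH DATABASE src;

    Repeating the ATTACH/INSERT/DETACH steps per monthly file approximates the --importFiles loop; INSERT OR IGNORE gives a rough version of duplicate avoidance, provided the table has a suitable unique key.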

    Read the article

  • Eee PC 1015BX ram compatibility?

    - by AdrianaMX
    Asus Eee PC 1015BX. Operating system: Windows 7 Starter, 32-bit. CPU: AMD Fusion APU C-60, 1.0 GHz (dual core). Graphics: AMD Radeon HD 6290 (256 MB shared). Memory: DDR3, 1 x SO-DIMM, 1 GB. I have upgraded the preloaded "Windows 7 Starter" to "Windows 7 Professional". I want to upgrade the RAM from 1 GB (factory) to 4 GB. What should I buy: SO-DIMM DDR3, 4 GB, 1066 MHz, PC3-8500, 204-pin, or SO-DIMM DDR3, 4 GB, 1333 MHz, PC3-10666, 204-pin? I already know that 32-bit Windows 7 can't handle 4 GB, only about 3 GB (but 3 GB is better than one stick of 2 GB). ASUS sent me this link, but I think they are wrong (or it has insufficient information for me): http://www.kingston.com/us/memory/search/Default.aspx?DeviceType=3&Mfr=ASU&Line=Eee%20PC&Model=71404 Thank you.

    CPU-Z chipset info: Memory Type DDR3; Memory Size 750 MBytes; Memory Frequency 532.2 MHz (3:16); CAS# Latency (CL) 7.0; RAS# to CAS# Delay (tRCD) 7; RAS# Precharge (tRP) 7; Cycle Time (tRAS) 20; Bank Cycle Time (tRC) 27; Memory SPD: NO INFO.

    AIDA64 north bridge properties: North Bridge AMD K14 IMC; Supported Memory Types DDR3-800, DDR3-1066 SDRAM; Memory Slots: DRAM Slot #1, 1 GB (DDR3 SDRAM). Integrated graphics controller: AMD Radeon HD 6290 (Wrestler), enabled, 256 MB frame buffer.

    Read the article

  • Website is not accessible from server which is using proxy

    - by Bhoot
    I hosted a website on a Windows 2008 R2 server which runs in a private domain. I set up bindings on ports 80 and 443 for HTTP and HTTPS respectively, and created inbound rules for ports 80 and 443 in Windows Firewall as well. After doing all this, I am still not able to access my website from a remote machine. IE: "Internet Explorer cannot display the webpage." Chrome: "Oops! Google Chrome could not find xxxxxx." I tried accessing the website by IP address, but no luck. I tried to ping the server, but it says "TTL expired in transit". I then found some advice on the internet about checking whether the server sits behind some kind of proxy: www.getip.com reports one IP address for me, but ipconfig /all gives a different one. Is it really a problem if we use a proxy? I am not sure I have concluded correctly, but is there any way to resolve this issue? Update: I figured it out. I have to call the website by its external IP address; due to the proxy settings I was not able to reach it by the server's own IP or machine name.

    Read the article

  • SSL certificates: how to use it?

    - by Rod
    I have a central server and I want to purchase an SSL certificate for it. The architecture is based on this central server plus many connected web servers on the client side (one for each user). A client can access both the main server and its local server, and the two servers also exchange data between them. I would like clients' web browsers to trust all the servers, always activating HTTPS and a secure connection when connecting to them. Assuming I can name all servers under the same domain name (I was thinking about a wildcard certificate anyway), what kind of certificate, or what use of it, can make these secure connections work? There is the possibility that the main server and a client-side server are not connected for a while; is it possible for a client to establish an HTTPS connection to its local server in that case? When I need to renew or change the certificate, I would like to change it only on the main server, avoiding the need to touch all the servers on the clients' side. Can I do that in some way?
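
    As background, a wildcard certificate covering every host under one domain starts from a single key and CSR; a sketch with openssl (example.com stands in for the real domain):

        openssl req -new -newkey rsa:2048 -nodes \
            -keyout wildcard.example.com.key \
            -out wildcard.example.com.csr \
            -subj "/CN=*.example.com"

    Note that deploying the signed certificate still means copying the key and certificate to every server that terminates TLS, which bears on the last question: a plain wildcard certificate cannot be renewed in one place only.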

    Read the article

  • Windows 7 system freezes: would like to know if they could be related to MrxSmb, Event ID 8003 errors

    - by lifegoeson
    First, this question centers on a home network. Is it okay to ask here, or should I go to SuperUser? (I see fewer answers over there, but I'll go there if that would be more appropriate.) Network setup: one machine running XP Pro, one machine running Win7 Ultimate, a Comcast router, and a Linksys WRT610N wireless router. The Win7 machine frequently goes into a total, unrecoverable system freeze. I was tearing my hair out trying to ascertain a cause, but I noticed that it usually seems to correspond with performing operations on the shared folders on the XP machine. The last two times the Win7 machine froze, I saw this entry for Event ID 8003 from source MrxSmb in the event log of the XP machine:

        The master browser has received a server announcement from the computer
        WIN7_COMPUTER that believes that it is the master browser for the domain on
        transport NetBT_Tcpip_{320B32A7-FED9. The master browser is stopping or an
        election is being forced.

    My question is twofold: Could this cause a Win7 system freeze? If so, what could I configure differently on my network to stop these conflicts over who is the master browser? Thank you for your help!

    Read the article

  • Is a modem required to be programmed when using with an internet provider?

    - by Tim
    I wonder whether a modem needs to be programmed when used with an internet provider? If yes, what is the purpose of programming it? Do both DSL and cable ISPs require a modem in an individual home? For example, I have a Motorola SURFboard modem, model SB5101 (customer S/N: xxx, S/N: xxx, HFC MAC ID: xxx, USB CPE MAC ID: xxx), plus a coil of cable and a splitter from a Comcast High-Speed Internet Self-Installation Kit, bought 5 years ago when I purchased Comcast internet service from its retailer www.comcastoffers.com. With them, I was hoping to reduce fees by avoiding having Comcast people come over to install. But I remember that at the time Comcast sent its technician anyway, who dismissed my idea of self-installation, saying they needed to use their own modem, and charged me a hefty fee; so my equipment has never been used. I haven't been using Comcast for a long time. I wonder if my modem, cable, and splitter (brand new, never used) are still good to use with an internet provider such as Comcast? If needed, we can ignore their policy and just consider the technology side. Or are they no good, and must I throw them away like trash? Thanks and regards!

    Read the article

  • netbook screen stays black

    - by sam113101
    I have an Acer Aspire One netbook. The screen is black, but the computer turns on (LEDs are on, fan is spinning, etc.). By black I mean absolutely no backlight. I tried removing the battery and powering it on to "discharge" it (I read that on the internet; not sure if that ever fixes anything), but no luck. I also tried replacing the RAM stick with another one (which I know for sure works properly), still no luck. I tried connecting an external monitor and switching to it (Fn+F5 on this particular model), still no luck, nothing on the external monitor. I read that flashing the BIOS could fix it (http://community.spiceworks.com/how_to/show/22042-acer-aspire-one-black-screen-of-death), but when I try, the machine basically does nothing when powered on with the USB thumb drive inserted; no blinking power button. To me it sounds like a dead motherboard, a dead RAM slot (there's only one), or the BIOS. I would like to rule out the BIOS possibility, but I need help. The reason I ruled out a dead screen is that it did not switch to the external display when I pressed Fn+F5; am I wrong to assume so? Thank you for your help.

    Read the article

  • Create a wifi hotspot in a place where an authentication is required [closed]

    - by SoftTimur
    I live in a residence where internet is provided via cable. Once a computer is connected to the cable, launching a browser triggers an authentication: I have a username and password to enter, and then the internet is connected. With a gateway (e.g. the Wireless Cable Voice Gateway, model CBVG834G) and two cables, two PCs can connect to the internet with my account at the same time. Now the question: I don't like the cable and would like to create a wifi hotspot. It seems realizable with the same gateway. According to the instructions on page 2-4 of the manual: "Enter http://192.168.0.1 in the address field of your Internet browser. Log in to the gateway with either of the default user names, MSO or admin..." However, trying to open 192.168.0.1 gives me an error in the browser. Does anyone know what happened? Is it due to the authentication required by my residence? Is there any other way to build a wifi hotspot? PS: My system is Mac OS.

    Read the article

  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model: the admin enables write-back cache with BBU; writes are cached to the RAID controller's RAM (major performance benefit); the battery saves uncommitted and cached data in the event of a power loss (reliability). If I lose power and come back within a day or so, my data should be both complete and uncorrupted. The downside is that, if the battery is dead or low, OR EVEN IF IT IS IN A RELEARN CYCLE (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance suffers. What's more, the relearn cycles are usually automated on a schedule which may or may not happen in the middle of heavy traffic, so if that's a concern it has to be manually disabled and manually scheduled for off-hours. Annoying either way. NV caches instead have capacitors with sufficient charge to commit any uncommitted-to-disk data to flash. Not only is that more survivable in longer outages, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great is the prospect of that flash module having an issue. What if it's completely hosed? What if it's only partially hosed? A bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail, and the card itself can fail - that's common territory, though. In case you didn't guess: yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)

    Read the article

  • Apache forwarding without redirecting (application won't follow redirects)

    - by DrewVS
    Recently we had to move /task to /public/task, and I'd like to configure Apache to redirect accordingly. However, using mod_rewrite, though it works in the browser, seems to break applications making API calls to the above location: the application gets back a page saying the resource has moved, but doesn't follow the redirect. So, is there a way to simply forward any traffic for /task to /public/task without 'redirecting', i.e. without returning a redirect status code? EDIT: Here's a little more information. I've found a simple test that clarifies what I'm trying to fix. The URL path https://mydomain.com/task needs to go to https://mydomain.com/public/task. If I use curl against the original path, it just returns a redirect notice page. If I add the -L flag, which tells curl to follow redirects, it follows the redirect successfully. I assume something very similar is happening in the application (which I don't have access to) that makes calls to the /task URL path. Since I cannot modify the application to make it follow redirects, I'm looking for a solution I can implement in Apache.
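
    One likely fix (a sketch, not taken from the poster's config): a mod_rewrite rule without the R flag performs an internal rewrite, serving the new path's content directly with a 200 rather than a 3xx:

        # In the vhost config; in a .htaccess context, drop the leading slash
        RewriteEngine On
        RewriteRule ^/task(/.*)?$ /public/task$1 [PT,L]

    The PT flag hands the rewritten URL back through the normal URL-to-filename mapping, which matters if /public/task is itself handled by an Alias or similar.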

    Read the article

  • Inexpensive (used) hardware for Xen virtualization test?

    - by Jason Antman
    Virtualization is one of the areas where I could really use some experience. I also run quite a few services (web, mail, DNS, etc.) out of my home. Since most of my hardware is getting a bit old (I'm running on stuff that was surplused years ago...), I decided that it's about time I start renewing some things and also play around with virtualization a bit more. My plan is to set up a SAN box (simple iSCSI target, relatively inexpensive GigE switch), get a pair (for starters) of new servers, and start building some new stuff with Xen, specifically planning on playing with live migration and full virtualization. Does anyone have recommendations for used, older "servers" (really anything in a rack-mount form factor; I'm not too worried about things like iLO/iLOM for the test nodes) that support VT-x/AMD-V? I'm biased toward HP, but it looks like they didn't make ProLiants with VT-x/Vanderpool processors until G6 (for the DL360) or so, which is way out of my price range. I'm looking in the sub-$300 range (or less, if possible), used, probably eBay. Any recommendations are greatly appreciated. Edit: And, to catch this before the comments start coming - these are personal systems. I have first-generation ProLiants still in use (I got them as corporate surplus in '05; they've been running since then, and were probably running since '01 or '02 before being sold). I don't need anything shiny and new - I've got a bunch of old boxes, at least one complete replacement for every model in use, and that's fine for me (and easy on the wallet).
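
    As a quick aside (standard Linux procedure, not specific to this question): whatever candidate box turns up can be checked for hardware virtualization support from any live CD, since the CPU flags show up in /proc/cpuinfo. A nonzero count means the CPU advertises VT-x or AMD-V, though the extensions must also be enabled in the BIOS:

        egrep -c '(vmx|svm)' /proc/cpuinfo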

    Read the article

  • Windows 7 Taskbar Icons Randomly Disappearing

    - by Ryker
    This is a problem I'm totally stumped on. Pinned taskbar icons, as well as open-program thumbnails, keep disappearing. This is totally random, with no set pattern. I've tried updating drivers for the video card and rolling back to old drivers; nothing seems to work. I've also tried the usual: unchecking "hide the taskbar" and rechecking it, changing the Start menu, keeping commonly used programs in the Start menu, etc. This machine uses an add-in card, which is a necessity. Here are the specs: new Dell Optiplex 390, i3-2100 processor, 3 GB RAM, Windows 7 32-bit. The add-in card is an Nvidia 8400 GS. I've tried different versions of the 8400 GS, from MSI and other manufacturers, and regardless, it keeps happening. After the icons disappear I can log out and log back in and they return, but then they disappear again. Reinstalling the video card drivers fixes it for a few days, and then they disappear again. There isn't any set pattern of days that it happens, software being used, etc. Another user in the office had the same problem with the exact same machine, but updating the video card drivers fixed that one. Anyone?

    Read the article

  • Printer irregularly producing garbage output

    - by John Gardeniers
    Every now and then, instead of the proper output we get numerous pages, mostly with just a single line each, of what appears to be the raw PCL. My theory is that this happens when the first byte or two of the document is somehow not received by the printer, which then doesn't know how to interpret the rest and does its best by spitting it out as text. This is a problem I've seen many times over the years, but it has been popping up more often since we upgraded to Windows 7 64-bit, which introduced a number of headaches because of HP's lack of real 64-bit support. It also appears to happen most often when printing PDF files. We have tried several different PDF readers in addition to Adobe's own, but that hasn't helped. While we mainly use HP printers, and the problem is not limited to any particular model, I've also seen it happen on other brands, albeit to a lesser extent. I've also been unable to discern any difference between printers used via a print server and those connected directly by IP address; it also happens to USB-attached printers. Because of the erratic nature of this problem there is precious little I can think of to try to debug it, so I'm after any ideas that might help eliminate it.

    Read the article

  • What is the alternative of Apache's global Alias in IIS? (e.g. Alias /phpMyAdmin "c:/AppServ/www/phpMyAdmin")

    - by Sk8erPeter
    I know there's an "Add Virtual Directory..." option for each site in IIS, with which I can set up e.g. phpMyAdmin's path to be reached by appending /phpmyadmin to the address (e.g. http://example.com/phpmyadmin), but isn't there a "global" setting similar to Apache's Alias? For example, in Apache this setting looks like this:

        <IfModule mod_alias.c>
        Alias /phpMyAdmin "c:/AppServ/www/phpMyAdmin"
        Alias /phpmyadmin "c:/AppServ/www/phpMyAdmin"
        </IfModule>

    This way I can reach phpMyAdmin on every host (http://example1.com/phpmyadmin and http://example2.com/phpmyadmin both work). But in IIS, do I have to add a virtual directory to every site? I'm just curious, because we would like to serve several domains' content, so there would be multiple sites. It would be more comfortable to do it once (and have the opportunity to remove it once), but if I have to, I'll add a virtual directory for each site. (I know, maybe that's the better solution, because I can have a site where I don't want phpMyAdmin to be available, but I was just curious.) Thanks in advance!
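
    For what it's worth, IIS has no direct equivalent of a global Alias, but the per-site virtual directory can at least be scripted with appcmd; a sketch (the site name is a placeholder):

        %windir%\System32\inetsrv\appcmd.exe add vdir /app.name:"Example Site/" ^
            /path:/phpmyadmin /physicalPath:"c:\AppServ\www\phpMyAdmin"

    Running that once per site (e.g. from a small batch loop over appcmd list sites) gets close to the "do it once" experience, even though the configuration itself remains per-site.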

    Read the article

  • What are the right questions to ask when deciding whether to use Chef or Puppet?

    - by John Feminella
    I am about to start a new project which will, in part, require deploying many identical nodes of approximately three different classes: data nodes, which will run sharded instances of MongoDB; application nodes, which will run instances of a Ruby on Rails application and an older ASP.NET MVC application; and processing nodes, which will run jobs requested by the application nodes. All the nodes will run on instances of Ubuntu 10.04, though they will have different packages installed. I have some familiarity with Chef from previous projects, though I don't consider myself an expert. In an effort to do due diligence, I have been investigating alternative possibilities. We have a number of folks in-house who are long-time Puppet users, and they have encouraged me to take a look. I am having trouble evaluating both choices, though. Chef and Puppet share much of the same domain terminology - packages, resources, attributes, and so on - and they have a common history that stems from taking different approaches to the same problem. So in some sense they are very similar. But much of the comparison information I've found, like this article, is a little outdated. If you were starting this project today, what questions would you ask yourself to decide whether you should use Chef or Puppet for configuration management? (Note: I don't want answers to the question "Should I use Chef or Puppet?")

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple - I just use the private domain names/IPs - but I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to script the configuration with the EC2 tools just yet. As I see it, I have two options. One: two Nagios installations (which is overkill for the small number of servers I'm dealing with). Pros: I don't have to alter the security group permissions, nor do I have to pay for the traffic, and there is redundancy in the monitoring solution - the Nagios servers could monitor each other. Cons: two installations to deal with, and I'd need to run another server instance. Two: have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security - the security group will have to have NRPE (5666) opened for one source IP - and I'd pay for a small amount of bandwidth at the internet rate for data transfer between the regions. I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!
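
    For concreteness, if the single-installation route wins, the cross-region checks are just ordinary NRPE checks pointed at the public endpoints; a sketch of the Nagios side (host name, address, and check are placeholders, not from the question):

        define command {
            command_name  check_nrpe_remote
            command_line  $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }

        define host {
            use        generic-host
            host_name  eu-west-app01
            address    ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com
        }

        define service {
            use                  generic-service
            host_name            eu-west-app01
            service_description  Disk usage
            check_command        check_nrpe_remote!check_disk
        }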

    Read the article

  • Virtual hosting in lighttpd?

    - by lighttpdnewbie
    Ok, here it goes... I've seen some other posts dealing with this, but they didn't help that much. I am using Windows XP. My problem is getting lighttpd working with virtual hosts. I managed to get everything up and working with the default /htdocs, and the default page shows up just fine on the internet, but since I have several sites to host, I need virtual hosting. I managed to do it in Apache, so I guessed it would work out just fine in lighttpd, but apparently I'm missing something. Let's say I have the domain (www.)example.org. I want everyone using that URL to end up at the correct index.html, obviously. Let's say that index.html is in the directory "websites/website1" under the lighttpd dir (thus, the full path is c:/ProgramsFiles/lighttpd/websites/website1/index.html). Now: how, exactly, do I set up my virtual host in the config file? In detail, please, since I've tried for hours with the vague hints I got from forums and such, but it doesn't work. Also: is there something additional to do? Change the "server.bind", get rid of the default server.document-root, or something? I appreciate all the help you can give! Especially if it's a verbatim/step-by-step solution you're offering! ;-p Edit: And, yes, my mod_simple_vhost has been enabled.
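
    A minimal sketch of the usual conditional-vhost form in lighttpd.conf, using the question's path (untested on that particular setup):

        $HTTP["host"] =~ "^(www\.)?example\.org$" {
            server.document-root = "c:/ProgramsFiles/lighttpd/websites/website1/"
        }

    One block like this per site, each matching its own host name. Note that host conditionals of this kind work without mod_simple_vhost; that module is an alternative mechanism that derives the document root from the host name and a directory pattern, and mixing the two is a common source of confusion.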

    Read the article

  • Can't connect to FTP server from a specific location

    - by wv_pip
    Last week while uploading website files to our server via FTP, the transfer failed. Ever since then, I haven't been able to connect to the server from work. I can connect just fine from home, or by using an FTP app on my cell phone as long as I'm on the cell network. I can't access the server from any machine on my work network. It's not a credential issue, either. The error message that I always get says that a connection cannot be established, and I am never prompted for my credentials. I have changed absolutely nothing on our domain controller or our firewall/router. I've contacted our ISP (who hosts the website/FTP server) and they can't find anything wrong on their end. They insist that it must be something here at the office that is blocking access. I've also tested access to other FTP servers (ea.com, nvidia.com, etc.) so I know that port 21 is not being blocked. I'm totally stumped. Any help is much appreciated. EDIT: wireshark info here: http://www.cloudshark.org/captures/85a118ae9296?filter=ip.dst%3D%3D66.118.64.208

    Read the article

  • TGT validation fails, but only for one user

    - by wzzrd
    I'm seeing the weirdest thing here. I have a couple of RHEL 3, 4, and 5 machines that validate user credentials through Kerberos with an Active Directory domain controller as their KDC. This works for all of my users, save one. There is one account that is unable to log into RHEL 3 Linux machines; it generates the following errors there:

        May 31 13:53:19 mybox sshd(pam_unix)[7186]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=user
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: TGT verification failed for `user'
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: authentication fails for `user'

    Other accounts, like my own, are fine:

        May 31 17:25:30 mybox sshd(pam_unix)[12913]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=myuser
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: TGT for myuser successfully verified
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: authentication succeeds for `myuser'
        May 31 17:25:31 mybox sshd(pam_unix)[12915]: session opened for user myuser by (uid=0)

    As you can see, TGT validation fails. This only happens for this specific account, not for any other. The failing user account's password has been reset, and I inspected both user objects in Active Directory, but I see nothing out of the ordinary. If the failing account logs into a RHEL 4 or 5 box, there is no problem, so it must be RHEL 3 specific; but the fact that only one account suffers from this eludes me. Maybe someone has seen this before?
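
    One low-level check worth trying from the affected RHEL 3 box (standard MIT Kerberos client tools, nothing specific to this setup): request and inspect a ticket for the failing account by hand, outside of PAM, and confirm the host key that TGT verification relies on:

        kinit user        # prompts for the AD password; errors here point at the KDC
        klist             # shows the TGT if kinit succeeded
        kvno host/mybox.example.com   # hypothetical host principal; TGT verification
                                      # needs a matching key in /etc/krb5.keytab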

    Read the article

  • Merging two separate DNS zones

    - by cube
    This is a hypothetical question. Let's suppose I have two networks, each with its own DNS server. Network A has names a1.local, a2.local, ... and network B has b1.local, b2.local, .... The zone file for each of the networks looks something like this for A:

        $ORIGIN local
        @   IN SOA  .... blah blah blah
        a1      A       1.2.3.4
        a2      A       2.3.4.5
        ...

    and this for B:

        $ORIGIN local
        @   IN SOA  .... blah blah blah
        b1      A       3.4.5.6
        b2      A       4.5.6.7
        ...

    Now I also have a regular internet domain, example.com, and I want to access the machines as a1.A.example.com, b1.B.example.com, ... How will I have to change the configuration of the name servers in networks A and B? (In fact I am writing a super-magic DNS server, currently serving A and B separately, but there is a chance that I will have to add the ability to merge the networks; so I'm interested in knowing what problems lie ahead of me and how to prepare for the possibility.)
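
    For orientation, the conventional answer has two halves: each internal server re-roots its zone ($ORIGIN A.example.com. instead of local), and the example.com zone delegates the subzones to them. A sketch of the delegation, with placeholder addresses not taken from the question:

        ; in the example.com zone file
        A.example.com.     IN NS   ns.A.example.com.
        ns.A.example.com.  IN A    1.2.3.1       ; glue record
        B.example.com.     IN NS   ns.B.example.com.
        ns.B.example.com.  IN A    3.4.5.1       ; glue record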

    Read the article

  • Print over the internet from a remote linux session locally (on a Windows 7 machine) to the shared printers?

    - by obeliksz
    I'm trying to use a Linux virtual machine as a file server for Windows clients. I have successfully implemented remote file sharing (Samba + SSH), with which I am able to print locally via a little program I made for this purpose (JetForms style)... but I would like to hear about a somewhat more direct approach. How can I attach the printers to the server, so that if I open a file in the remote session, the print dialog box would show my local printers (on the machine from which I established the remote session)? I guess there should be some kind of PuTTY tunnelling, but I don't know how. I have a Windows 7 machine locally; the CentOS 6 VM is over the internet and has SSH, CUPS, and Samba. I found a question which asks the opposite - printing from Linux to a Windows networked printer on a domain - but mine is just a simple Windows workstation that is behind NAT and has a dynamic IP. That question is: Print from Linux to Windows networked printer.
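
    A sketch of the tunnelling idea (an assumption, not a tested recipe: it presumes something on the Windows side speaks IPP, e.g. a CUPS instance under Cygwin or another local IPP print service). A reverse SSH tunnel lets the remote CUPS reach back into the NATed workstation, dynamic IP and all:

        # On the Windows machine (plink is PuTTY's command-line client;
        # OpenSSH's ssh takes the same -R syntax):
        plink -R 6631:localhost:631 user@centos-server

        # On the CentOS side, point a queue at the tunnelled port
        # (printer and queue names are hypothetical):
        lpadmin -p localprinter -E -v ipp://localhost:6631/printers/MyPrinter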

    Read the article
