Search Results

Search found 5919 results on 237 pages for 'regex matching'.


  • NGINX returning 404 error on a valid url

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40 character random strings (alphanumerics only -- example below). Today for the first time we ran into an issue with this approach. The following URL:

        http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS

    is returning a 404 error. This URL format has been working for 6 months now without an issue, and other URLs following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks!

    Update: Adding a slash to the end of the URL allowed it to be handled properly... Would still like to get to the bottom of the issue though.

    Solved: The problem was caused by part of my configuration... Realized I should have posted, but was headed out of town and didn't have a chance. Any URL that ended in, say, "css" or "js" and not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss) was interpreted as a request for a file and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this:

        location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ {

    with this:

        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {

    And now URLs ending in css, js, png, etc. are properly routed through the front controller. Hopefully that helps someone else out.
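
    The difference between the two patterns is easy to see outside of nginx. Here is a minimal Python sketch of the same two regexes (the URIs are hypothetical examples):

        import re

        old = re.compile(r"^.+(jpg|jpeg|gif|css|png|js|ico)$")  # matches any URI merely *ending* in "css"
        new = re.compile(r"\.(jpg|jpeg|gif|css|png|js|ico)$")   # requires a literal dot before the extension

        for uri in ["/response/somerandomstringcss", "/assets/site.css"]:
            print(uri, bool(old.search(uri)), bool(new.search(uri)))
        # /response/somerandomstringcss True False
        # /assets/site.css True True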

    Read the article

  • Things to check for an internet-facing email server.

    - by Shtééf
    I'm faced with the task of setting up a public-internet-facing email server that will be relaying mail for all of our other servers in the network. While the software itself is set up in a few keystrokes, what little experience I have with managing an email server has taught me that there are tons of awkward filtering techniques employed by other email systems -- systems that my own server will inevitably interact with at some point. Hence, my questions:

    What things should be kept in mind and double-checked when setting up an email server? What resources are available for checking if my email server is set up correctly?

    I'm specifically NOT looking for instructions for any given mail server, such as Exchange or Postfix. But it's okay to say: "you should have X and Y in your set-up, because when talking to server software Z, it typically tries to weed out open relays by checking for these."

    Some things I've discovered myself: Make sure forward and reverse DNS are set up. Mail servers tend to do a reverse lookup for the peer IP address when receiving. Matching a reverse lookup with a follow-up forward lookup is probably employed to weed out open relays run through malware on home networks. Make sure the user in the From-address exists. The From-address is easily spoofed. A receiving mail server may try to contact the mail server in the From-domain, and see if the From-user actually exists.
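
    The forward-confirmed reverse DNS check described above is simple to reproduce; here is a minimal sketch using Python's standard socket module:

        import socket

        def forward_confirmed_rdns(ip):
            # Reverse lookup: IP -> hostname
            try:
                host, _, _ = socket.gethostbyaddr(ip)
            except socket.herror:
                return False  # no PTR record at all -- many receivers penalize this
            # Forward lookup: hostname -> IPs; the original IP must be among them
            try:
                _, _, addrs = socket.gethostbyname_ex(host)
            except socket.gaierror:
                return False
            return ip in addrs

        print(forward_confirmed_rdns("8.8.8.8"))  # True: PTR and A records agree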

    Read the article

  • Trouble getting started with the STEALTH monitoring package

    - by dlanced
    Is anyone here familiar with the Linux-based STEALTH package (for monitoring FS integrity of client systems)? I'm trying to get started with a very simple configuration, but I'm running into trouble (this is running under Ubuntu 14.04):

        Config line `USE BASE/root/stealth/10.0.0.79' invalid
        STEALTH (2.11.02) started at Fri, 30 May 2014 15:25:00 +0000
        Program terminated due to non-zero exit value for
        -type f -exec /usr/bin/sha1sum {} \; (EOC Fri May 30 15:25:00 2014 127)

    Stealth is creating a binary tmp file in the Stealth server root and generating a "report" file in the start directory, but not much else. Regarding the "USE BASE...invalid" error, and just to be sure, I manually created the directories in /root, but it didn't help. And, by the way, I am running stealth with sudo. Everything seems to be configured correctly: I'm able to ssh into root@client from the stealth machine without a password.

    Here's my "policy" file (I've removed the email directives just for simplicity):

        DEFINE SSHCMD /usr/bin/ssh [email protected] -T -q exec /bin/bash --noprofile
        DEFINE EXECSHA1 -xdev -perm +u+s,g+s ( -user root -or -group root ) \
            -type f -exec /usr/bin/sha1sum {} \;
        USE BASE/root/stealth/10.0.0.79
        USE SSH ${SSHCMD}
        USE DD /bin/dd
        USE DIFF /usr/bin/diff
        USE PIDFILE /var/run/stealth-
        USE REPORT report
        USE SH /bin/sh
        GET /usr/bin/sha1sum /root/tmp
        LABEL \nchecking the client's /usr/bin/find program
        CHECK LOG = remote/binfind /usr/bin/sha1sum /usr/bin/find
        LABEL \nsuid/sgid/executable files uid or gid root on the / partition
        CHECK LOG = remote/setuidgid /usr/bin/find / ${EXECSHA1}
        LABEL \nconfiguration files under /etc
        CHECK LOG = remote/etcfiles \
            /usr/bin/find /etc -type f -not -perm /6111 \
            -not -regex "/etc/(adjtime\|mtab)" \
            -exec /usr/bin/sha1sum {} \;

    Any ideas? Thanks
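
    Independent of STEALTH's policy syntax, the checks it runs boil down to hashing a file set and diffing against a baseline. A minimal Python sketch of that idea, for comparison:

        import hashlib, os

        def sha1_tree(root):
            # Hash every regular file under root, like the find/sha1sum pipeline above
            sums = {}
            for dirpath, _, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as f:
                            sums[path] = hashlib.sha1(f.read()).hexdigest()
                    except OSError:
                        pass  # unreadable file; skip it
            return sums

        baseline = sha1_tree("/etc")
        # ...later, on a subsequent run:
        changed = [p for p, h in sha1_tree("/etc").items() if baseline.get(p) != h]
        print(changed or "no changes")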

    Read the article

  • Using AWStats, cannot get MaxNbOfExtraX to limit rows in Extra Report

    - by user137519
    Folks, got something really odd here I'd like to resolve. I've been using AWStats and have a couple of extra reports. I cannot get any of them to limit the rows using MaxNbOfExtraX. Here are two examples:

        ExtraSectionName1="Top 100 Searches"
        ExtraSectionCodeFilter1="200 304"
        ExtraSectionCondition1="URL,/search/search_post.php"
        ExtraSectionFirstColumnTitle1="Search Parameters"
        ExtraSectionFirstColumnValues1="QUERY_STRING,(.*)"
        ExtraSectionFirstColumnFormat1="QueryParameters: %s"
        ExtraSectionStatTypes1=HL
        ExtraSectionAddAverageRow1=0
        ExtraSectionAddSumRow1=1
        MaxNbOfExtra1=100
        MinHitExtra1=4

        ExtraSectionName2="Top 100 Downloads"
        ExtraSectionCodeFilter2="200 304"
        ExtraSectionCondition2="URL,/filedownload.php"
        ExtraSectionFirstColumnTitle2="File Downloads"
        ExtraSectionFirstColumnValues2="QUERY_STRING,(.[0-9]{5})(h|p)?."
        ExtraSectionFirstColumnFormat2="File ID: %s"
        ExtraSectionStatTypes2=HL
        ExtraSectionAddAverageRow2=0
        ExtraSectionAddSumRow2=1
        MaxNbOfExtra2=100
        MinHitExtra2=3

    According to all the documentation I've read, MaxNbOfExtra1 should keep the limit to 100. However, when I run this with debug messages enabled, I get a message indicating that the query would be in excess of 500, and it would not run. I increased ExtraTrackedRowsLimit to 2000 and it would work, but the option I provided should have lowered that. I even tried without ExtraTrackedRowsLimit, with MaxNbOfExtra1=100, but got the same result: no limit to 100, and the "excess of 500" error. I have URLWithQuery=1 and my reports do run properly along with my regex filters. I am using MinHitExtra1 to limit the rows and that works, so why can I not get the MaxNbOfExtraX option to work? Any ideas? Thanks in advance.

    Read the article

  • VMware Workstation Bridged Network Host UnReachable

    - by user2097818
    VMware Workstation 7 on Win7-64 (Home Premium). I have confirmed this on any guest running on this machine (from WinXP to Debian). I am using a bridged network connection for my guests (Automatic on VMnet0). All of the network configuration is done with DHCP (including on the host).

    Problem -- what I can not do: ping my host machine from inside any VM (it either shows me "Destination Host Unreachable" or will just time out).

    What I CAN do right after power up, with no problems at all: I can connect to the internet from inside the VM; I can ping my router from inside the VM; I can ping other machines on my network from inside the VM; other machines can ping the VM; other machines can ping the host; my host machine can ping the VM (this one is important -- read further).

    Details: I have my router assigned as 192.168.2.1/255.255.255.0, and the router provides the DHCP service (and it seems to be doing so successfully). There are no IP conflicts on the network that I am aware of. All gateways and subnet masks are appropriate and matching. My entire workshop is on one single subnet, with one single DHCP server and gateway. There is one method in which I can ping successfully, but it requires an active connection initiated from the host (I start pinging from host to VM). During the period of the active connection, I can successfully ping from VM to host, using an explicit IP address. As soon as the host connection is closed, the VM ping starts hanging with the same old messages.

    My thoughts: this really feels like a firewall problem, but I have turned off all firewalls on host and VM, powered down the network, powered back up, and the problem still persists. And if it was a firewall, why would only the IP address associated with bridged VM networks be blocked? I feel as though my host operating system (Win7) is somehow configured incorrectly, or VMware Workstation is configured incorrectly from the host side. Although I have done my best to put everything in default, I feel like I am missing something silly.

    Read the article

  • Combine multiple rows into one

    - by Jim
    I am trying to combine multiple rows of data into one. Column A contains the value on which the groupings will be based -- rows whose Column A values match will be combined into one row. My range extends from column A through X, so I need the matching row of data to start in column Y.

    Example:

        1001 | A | C
        1001 | B | D
        1002 | A | E
        1002 | B | F
        1002 | C | G

    Desired result:

        1001 | A | C | B | D |   |
        1002 | A | E | B | F | C | G

    The VBA code I am currently using is not taking the entire contents of the matched row. It is only taking the data in the 2nd column and moving it up.

    VBA code:

        Sub Mergeitems()
            Dim cl As Range
            Dim rw As Range
            Set rw = ActiveCell
            Do While rw <> ""                    ' for each row in data set
                ' find first empty cell on row
                Set cl = rw.Offset(0, 1)
                Do While cl <> ""
                    Set cl = cl.Offset(0, 1)
                Loop
                ' if next row needs to be processed...
                Do While rw = rw.Offset(1, 0)
                    cl = rw.Offset(1, 1)         ' move the data
                    Set cl = cl.Offset(0, 1)     ' update pointer to next blank cell
                    rw.Offset(1, 0).EntireRow.Delete xlShiftUp  ' delete old data
                Loop
                ' next row
                Set rw = rw.Offset(1, 0)
            Loop
        End Sub
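
    For comparison, here is a sketch of the intended grouping in Python with pandas (the original is Excel VBA; the column names here are hypothetical, but the flattening logic is the same):

        import pandas as pd

        df = pd.DataFrame({
            "key": [1001, 1001, 1002, 1002, 1002],   # column A
            "c1":  ["A", "B", "A", "B", "C"],
            "c2":  ["C", "D", "E", "F", "G"],
        })

        # For each key, ravel every matching row's values into one wide row;
        # shorter groups are padded with NaN, like the blanks in the example.
        combined = (
            df.groupby("key")
              .apply(lambda g: pd.Series(g[["c1", "c2"]].to_numpy().ravel()))
              .reset_index()
        )
        print(combined)
        #     key  0  1  2  3    4    5
        # 0  1001  A  C  B  D  NaN  NaN
        # 1  1002  A  E  B  F    C    G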

    Read the article

  • Discover the public ip of a network without being connected

    - by Martin Trigaux
    Let's say I'm next to a network and can see the traffic (with airodump or a similar tool) but cannot decipher it (because I am not connected to the network). Is it possible to discover the public IP address of the network? I know the MAC addresses of the users connected to the network, but do I know the router's? If yes, maybe there is a way to do the matching. I know IP addresses are not forever, but some addresses are static and never change. Maybe there is a database of MAC addresses having recorded that. Google has a database that matches MAC addresses and geographical coordinates, so why not with IP addresses? Other idea: if I know where I am, I can maybe guess the IP range used in the city by the ISP (is it findable?) and then try to "ping" each IP in the range (if it is a /24 it's possible, even a /16 maybe). Will I get some information like the MAC of the box, or see some traffic on the network? These are two ideas I had. I don't know if they are doable -- certainly not perfect. Do you think of some others? By trying several methods, maybe I can get a guess with a bit of luck. Thank you

    Read the article

  • Using Windows Explorer, how to find file names starting with a dot (period), in 7 or Vista?

    - by Chris W. Rea
    I've got a MacBook laptop in the house, and when Mac OS X copies files over the network, it often brings along hidden "dot-files" with it. For instance, if I copy "SomeUtility.zip", a hidden ".SomeUtility.zip" file will also be copied. I consider these OS X dot-files useless turds of data as far as the rest of my network is concerned, and don't want to leave them on my Windows file server. Let's assume these dot-files will continue to happen, i.e. consider the issue of getting OS X to stop creating those files in the first place to be another question altogether. Rather: how can I use Windows Explorer to find files that begin with a dot / period? I'd like to periodically search my file server and blow them away. I tried searching for files matching ".*" but that yielded – and not unexpectedly – all files and folders. Is there a way to enter more specific search criteria when searching in Windows Explorer? I'm referring to the search box that appears in the upper-right corner of an Explorer window. Please tell me there is a way to escape my query to do what I want? (Failing that, I know I can map a drive letter and drop into a cygwin prompt and use the UNIX 'find' command, but I'd prefer a shiny easy way.)
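
    In the same scripted spirit as the cygwin fallback, a short Python sketch can do the periodic sweep (the UNC path is hypothetical):

        from pathlib import Path

        root = Path(r"\\fileserver\share")  # hypothetical share to clean
        # rglob(".*") matches names that start with a literal dot
        for p in root.rglob(".*"):
            if p.is_file():
                print(p)        # review the list first...
                # p.unlink()    # ...then uncomment to actually delete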

    Read the article

  • What's a good solution for file-tagging in linux?

    - by julien
    I've been looking for a way to tag my files and search/filter them based on those tags. Here are my (updated) requirements: any file readable by the user can be tagged freely; a user can search for files matching one or several tags; files can be moved around without losing the previously associated tags; the system could be backed up easily; no dependencies on any desktop environment; if any GUI is involved, there must be a CLI fallback. I've been hoping for some basic filesystem & coreutils hackery to handle this, but I haven't thought about this hard enough yet. Meanwhile I'll review beagle and metatracker, which have been mentioned here, and see how they perform. Ok, so beagle has huge GNOME dependencies, and tracker is okish, but still has some dependencies I don't like... Been doing some more research, and the way to go could very well be extended file attributes. That's a native solution for most recent filesystems, but they aren't very well supported yet (most coreutils destroy them by default; cp, for example, needs the -a flag to preserve them). Would like to hear some thoughts on using them while I try my hand at some hacks myself, even though this might warrant a new question.
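
    As a taste of the extended-attributes route, a minimal Python sketch (Linux-only os.setxattr/os.getxattr; the "user.tags" attribute name is just a convention chosen here):

        import os

        path = "notes.txt"
        # Tag the file (requires a filesystem mounted with xattr support)
        os.setxattr(path, "user.tags", b"linux,tagging,todo")

        # Read the tags back
        print(os.getxattr(path, "user.tags").decode())

        # CLI-style search: walk a tree and keep files carrying a given tag
        def files_with_tag(root, tag):
            for dirpath, _, files in os.walk(root):
                for name in files:
                    p = os.path.join(dirpath, name)
                    try:
                        tags = os.getxattr(p, "user.tags").decode().split(",")
                    except OSError:
                        continue  # file has no user.tags attribute
                    if tag in tags:
                        yield p

        print(list(files_with_tag(".", "todo")))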

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi, I have about 200 sites, each of which has 2 servers running MSSQL (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for... a lot... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copies them to the secondary server, restores the backup and log with NORECOVERY, creates the endpoints, and sets the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage sql 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love to be able to use a script to ensure there are matching endpoints (same ports) on both servers, back up the database, back up the log, copy the backups to the second server, restore the database and log with NORECOVERY, set the partners on both servers, and somehow confirm that the databases are linked and synchronized. Well, thanks for reading :)
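
    For what it's worth, the sequence described maps onto a handful of T-SQL statements per database. A hedged Python sketch driving the sqlcmd CLI (server names, share, and port are hypothetical, and error handling is left to check=True):

        import subprocess

        def sqlcmd(server, query):
            subprocess.run(["sqlcmd", "-S", server, "-Q", query], check=True)

        db, principal, mirror = "AppDb", "SQLA", "SQLB"
        share = r"\\SQLB\backups"

        sqlcmd(principal, f"BACKUP DATABASE [{db}] TO DISK = '{share}\\{db}.bak' WITH INIT")
        sqlcmd(principal, f"BACKUP LOG [{db}] TO DISK = '{share}\\{db}.trn'")
        sqlcmd(mirror, f"RESTORE DATABASE [{db}] FROM DISK = '{share}\\{db}.bak' WITH NORECOVERY")
        sqlcmd(mirror, f"RESTORE LOG [{db}] FROM DISK = '{share}\\{db}.trn' WITH NORECOVERY")
        # Mirroring endpoints (same port on both sides) must already exist; mirror side first:
        sqlcmd(mirror, f"ALTER DATABASE [{db}] SET PARTNER = 'TCP://SQLA.example.local:5022'")
        sqlcmd(principal, f"ALTER DATABASE [{db}] SET PARTNER = 'TCP://SQLB.example.local:5022'")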

    Read the article

  • How do I recover drivers from other hard disk

    - by Carl
    The drivers for a Cardbus (PCMCIA) card that gives me 2 USB 2.0 ports are on the hard disk from my old laptop. I have lost the driver CD. I have a way to get files from that other hard disk. Which files do I need? The drivers for the card used to be on the following website - the information is still there, except the download links don't work: http://www.ht-link.com/en/DownView.asp?ID=10 - The drivers I need are the first listing - The Win XP drivers for the HT-112NEC. My e-mails to them have not been answered. The information on this card is here http://www.ht-link.com/en/ProductView.asp?ID=106 I already tried connecting that other drive to my new laptop (via USB) and adding the drive to the search criteria when selecting update driver in the Device Manager. It says there isn't a better match, and if I select manual the matching device is not listed. (I don't think "manual" sees drivers on the external hard disk - but only ones on the main drive and/or found listed in the registry.) I would try 'have disk' if I knew exactly what file to point to on the external drive. The drivers are on that hard disk - I installed them there, and used that card on that computer. The new laptop has Windows XP Pro SP3, the old one had Pro SP2 Thanks for any help.

    Read the article

  • UIDs for service users in Mac OS X

    - by LaC
    Some third-party servers should be run under a special user for security reasons (eg, PostgreSQL is typically run by "postgres"). Of course, these service users should not show up in the Mac OS X login windows. I know how to create hidden users using dscl or dsimport, but I'm wondering what the best policy is for assigning UIDs (and matching GIDs). Apple's documentation states that UIDs from 0 to 100 are reserved (pg. 69), but OS X comes with several special users and groups outside that range. I used to use ids from 401 onwards for services, but I noticed that OS X 10.6 has started using that range for groups created by the Sharing pane in System Preferences. What is the recommended ID range to use for third-party services, then? Perhaps I should just use IDs in the 500 range, since all that is needed to hide a user in Snow Leopard is setting his password to "*"? Also, most of Apple's services have names starting with an underscore, with an alias sans underscore; eg, _sandbox and sandbox. Is there any special significance to this? Should I do the same for my services?

    Read the article

  • fail2ban iptable rule wont block

    - by Termiux
    So I set up fail2ban on my Debian 7 server, but I've still been getting hit a lot and I don't know why it's not blocking properly. The regex works -- it recognizes the attempts -- but it seems the iptables rules it inserts don't work. This is how the iptables output looks after fail2ban tries to block:

        Chain INPUT (policy ACCEPT)
        num  target                prot opt source     destination
        1    fail2ban-courierauth  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:25
        2    fail2ban-couriersmtp  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:25
        3    sshguard              all  --  0.0.0.0/0  0.0.0.0/0

        Chain FORWARD (policy ACCEPT)
        num  target  prot opt source  destination

        Chain OUTPUT (policy ACCEPT)
        num  target  prot opt source  destination

        Chain fail2ban-courierauth (1 references)
        num  target  prot opt source     destination
        1    DROP    all  --  216.x.y.z  0.0.0.0/0
        2    RETURN  all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-courierimap (0 references)
        num  target  prot opt source     destination
        1    RETURN  all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-courierpop3 (0 references)
        num  target  prot opt source     destination
        1    RETURN  all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-couriersmtp (1 references)
        num  target  prot opt source     destination
        1    RETURN  all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-postfix (0 references)
        num  target  prot opt source     destination
        1    RETURN  all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-sasl (0 references)
        num  target  prot opt source     destination
        1    RETURN  all  --  0.0.0.0/0  0.0.0.0/0

    In the iptables output above you can see the "Chain fail2ban-courierauth" rule that added the DROP rule for the IP, but I'm still able to connect to the server. Why isn't it blocking?

    Read the article

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, so we cannot block with the actual client IP address (it would be the IP address of the CDN, not the actual client). So, we have $http_x_forwarded_for, which contains the IP which we need to block for a given request. Similarly, we cannot use iptables, as blocking the IP address of the proxied client will have no effect. We need to use nginx to block the request based on the value of $http_x_forwarded_for.

    Initially, we tried multiple, simple if statements: http://pastie.org/5110910. However, this caused our nginx memory usage to jump considerably. We went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923

    Keep in mind that we're trying to block many more than 3 or 4 IP addresses... more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also in whether there are any better ways to achieve our goal.
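
    One way to keep the single-regex variant maintainable is to generate it from a plain list of addresses. A small Python sketch (the addresses are examples, and returning 403 is just one choice of response):

        import re

        blocked = ["203.0.113.7", "198.51.100.23", "192.0.2.99"]

        # One alternation, dots escaped, anchored to the whole header value
        pattern = "^(" + "|".join(re.escape(ip) for ip in blocked) + ")$"

        # Emit an nginx snippet using the combined pattern
        print('if ($http_x_forwarded_for ~ "%s") { return 403; }' % pattern)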

    Read the article

  • SharePoint blog site won't search local site... you can only search for Mysites and users

    - by Don
    I have a how-to company blog site that I post to for my clients to access for help. For some reason it has stopped letting anyone search on it. I can search for MySites or users, but when you drop down the tab to search This Site: "blog site name", you get the following reply:

        No results matching your search were found.
        Check your spelling. Are the words in your query spelled correctly?
        Try using synonyms. Maybe what you're looking for uses slightly different words.
        Make your search more general. Try more general terms in place of specific ones.
        Try your search in a different scope. Different scopes can have different results.

    I have tried the following commands from the index server:

        net stop osearch
        net start osearch
        iisreset /noforce

    But I'm still not able to search a local blog site; I can only search for users and sites. Please help. Don

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition then it will always match on the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so just the base URL will no longer match. You can also fix the problem by ordering the sites in the config file so that the smallest substring will always be last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal since it will generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
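
    The substring collision is easy to reproduce outside Apache. A Python sketch of first-match-wins prefix matching, plus a boundary-anchored variant (^/drupal(/|$)) that one could try instead of the bare prefix -- an assumption here, not a tested fix:

        import re

        sites = ["/drupal", "/drupaltest"]   # order as in the config above
        uri = "/drupaltest/node/1"

        # Bare prefix, first match wins -- picks the wrong site
        print(next(s for s in sites if re.match("^" + re.escape(s), uri)))             # /drupal

        # Prefix that must end at a path boundary -- picks the right site,
        # and unlike a trailing $, still matches deeper paths and query-laden URLs
        print(next(s for s in sites if re.match("^" + re.escape(s) + r"(/|$)", uri)))  # /drupaltest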

    Read the article

  • rsync --remove-source-files but only those that match a pattern

    - by Daniel
    Is this possible with rsync? Transfer everything from src:path/to/dir to dest:/path/to/other/dir, and delete some of the source files in src:path/to/dir that match a pattern (or size limit), but keep all other files. I couldn't find a way to limit --remove-source-files with a regexp or size limit.

    Update 1 (clarification): I'd like all files in src:path/to/dir to be copied to dest:/path/to/other/dir. Once this is done, I'd like to have some files (those that match a regexp or size limit) in src:path/to/dir deleted, but don't want to have anything deleted in dest:/path/to/other/dir.

    Update 2 (more clarification): Unfortunately, I can't simply rsync everything and then manually delete the files matching my regexp from src:. The files to be deleted are continuously created. So let's say there are N files of the type I'd like to delete in src: when rsync starts. By the time rsync finishes there will be N+M such files there. If I now delete them manually, I'll lose the M files that were created while rsync was running. Hence I'd like a solution that guarantees that the only files deleted from src: are those known to be successfully copied over to dest:. I could fetch a file list from dest: after the rsync is complete, compare that list of files with what I have in src:, and then do the removal manually. But I was wondering if rsync can do this by itself.
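
    One safe pattern, sketched in Python (the regexp and paths are placeholders): snapshot the matching files before the transfer, and delete only the snapshot afterwards, so files created mid-run are never touched:

        import re, subprocess
        from pathlib import Path

        src = Path("/path/to/dir")
        deletable = re.compile(r"\.log$")   # hypothetical pattern

        # Snapshot before the transfer; anything created later isn't in it
        doomed = [p for p in src.rglob("*") if p.is_file() and deletable.search(p.name)]

        # Transfer everything; rsync builds its file list at startup, so the
        # snapshot is covered. check=True aborts before any deletion on failure.
        subprocess.run(["rsync", "-a", f"{src}/", "dest:/path/to/other/dir/"], check=True)

        for p in doomed:
            p.unlink()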

    Read the article

  • Problems setting up VLC Sever/client streaming

    - by Ayos
    I'm trying to set up a Linux machine as the server and a Windows XP machine as the client. Both machines are connected to the same local network via a Wi-Fi router. I set up the stream with the following properties: HTTP stream, port 8080, play locally -- and not much else. There is no firewall on the Windows client (Windows Firewall is disabled). When I try to open the network stream from the client machine (using VLC or Windows Media Player) I get the following errors:

    Media Player error code 0xC00D11B3: Encountered a network problem.

    VLC console:

        main warning: connection timed out
        access_mms error: cannot connect to 192.168.1.3:8080
        main debug: no access module matching "http" could be loaded
        main debug: TIMER module_need() : 12625.810 ms - Total 12625.810 ms / 1 intvls (Avg 12625.809 ms)
        main error: open of `http://192.168.1.3:8080' failed
        main debug: dead input
        main debug: repeating item
        main debug: starting playback of the new playlist item
        main debug: resyncing on http://192.168.1.3:8080
        main debug: http://192.168.1.3:8080 is at 0
        main debug: creating new input thread
        main debug: Creating an input for 'http://192.168.1.3:8080'
        main debug: using timeshift granularity of 50 MiB, in path 'C:\DOCUME~1\Accer\LOCALS~1\Temp'
        main debug: `http://192.168.1.3:8080' gives access `http' demux `' path `192.168.1.3:8080'
        main debug: creating demux: access='http' demux='' location='192.168.1.3:8080' file='\\192.168.1.3:8080'
        main debug: looking for access_demux module: 0 candidates
        main debug: no access_demux module matched "http"
        main debug: TIMER module_need() : 0.461 ms - Total 0.461 ms / 1 intvls (Avg 0.461 ms)
        main debug: creating access 'http' location='192.168.1.3:8080', path='\\192.168.1.3:8080'
        main debug: looking for access module: 2 candidates
        access_http debug: http: server='192.168.1.3' port=8080 file=''
        main debug: net: connecting to 192.168.1.3 port 8080
        qt4 debug: IM: Deleting the input
        main debug: TIMER input launching for 'http://192.168.1.3:8080' : 13397.979 ms - Total 13397.979 ms / 1 intvls (Avg 13397.978 ms)
        qt4 debug: IM: Setting an input

    Need help. Thanks in advance.

    Read the article

  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. "Top" is not helpful for this, since its calculations for RES and VIRT are inconsistent. I am instead matching up the contents of /proc/sysvipc/shm with the memory usage reported by the database admin console. I do this by saving the contents of /proc/sysvipc/shm and then totaling up "bytes" across all of the segments for the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However, it doesn't match up: the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running this as root, so I am sure I see all shared memory segments. My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or is this only shared memory (and not "un-shared" memory)? If this is incorrect, what is the correct way to calculate the total allocated memory for each userid? Also: I believe doing a 'cat' on this file locks server IPC. I check it every 5 seconds -- is it likely that this frequency could be problematic? Thanks! Mark Teehan, Singapore
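
    The per-userid totaling can be scripted directly. A minimal Python sketch that sums the size column per uid (the column name varies slightly across kernels, hence the lookup by header):

        from collections import defaultdict

        totals = defaultdict(int)
        with open("/proc/sysvipc/shm") as f:
            header = f.readline().split()
            uid_col = header.index("uid")
            size_col = header.index("size" if "size" in header else "bytes")
            for line in f:
                fields = line.split()
                totals[int(fields[uid_col])] += int(fields[size_col])

        for uid, total in sorted(totals.items()):
            print(uid, "%.2f GiB" % (total / 2**30))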

    Read the article

  • Must have local user to authenticate Samba to AD?

    - by Phil
    I've got a CentOS 5.3 server with Samba running. I've joined this server to my domain in the hopes of allowing AD users some access to my Samba shares. I've found that this works, but only as long as the AD username that I'm trying to authenticate with is also a local user on the server. In other words, if I'm trying to access a share and try to authenticate with the AD username "joe", I get errors unless I create a user named 'joe' on the server. I don't have to create a matching password or anything... the local user's password is always blank, so I do know that the authentication is actually happening against the AD. Here's my smb.conf file:

        [global]
        workgroup = <mydomain>
        server string = <snip>
        netbios name = HOME
        security = ADS
        realm = <mydomain.com>
        password server = <snip>
        auth methods = winbind
        log level = 1
        log file = /var/log/samba/%m.log

        [amore]
        path = /var/www/amore
        browseable = yes
        writable = yes
        valid users = DOMAIN\user1 DOMAIN\user2 DOMAIN\user3 DOMAIN\user4

    I would assume that my kerberos settings are fine, as I've joined the domain and can use wbinfo to see users and groups. However, I can provide that info if necessary. Anyone have any ideas?

    Read the article

  • Windows 7 product key, which is the valid one - in registry or on a sticker?

    - by me how
    I am not too familiar with software licensing and how it all works, but I have a question regarding Windows 7, and partially Office -- generally Microsoft products. I have been asked to assist our IT guy, who wants to collect all the product IDs for Windows 7 and Office. I haven't been given many details about how to go about it and how to collect them. After a bit of research I decided to use a freeware tool that pulls the software licenses out of the registry. I thought that was the easiest approach and would provide the most accurate product IDs. It turns out that about 25 machines use the same product key. I asked whether the company has bought a commercial license or something, but there isn't anyone available at the moment who could answer my question. I told the IT guy that there are 25 machines using the same product key and asked if that is alright. He told me to go around and write down the product keys from the sticker (label) on each machine. I am just not quite sure that's the right approach, especially since the numbers do not match... So, now I see that the numbers aren't matching, and my question is, in terms of software licensing, which is the VALID and correct product key to provide if ever questioned about a software license? Is it the number on the sticker or the number stored in the registry?

    Read the article

  • Searching for just files

    - by M Schenkel
    I have a couple of questions about searching for files on Windows 7. I find the XP method much easier than this new Windows 7 search. Note: I am only concerned about finding files matching a search term, not ALL files containing the search term. Is there a way to search just for files? When I use the search, it seems to be searching "within" files and returning instances where the name of the file is used. Example: I have a whole web directory and want to find the JavaScript files. But if I enter "myjavascript.js" in the search, it also returns all the HTML files which reference the JavaScript file. This is both slow and makes it difficult to actually find the reference to the file. Is there a way to search for an exact match? The search seems to implicitly use wildcards. For instance, say I have a bunch of files in a folder: file1.txt, file11.txt, file12.txt, file13.txt. If I enter "file1.txt" in the searcher, it returns instances as if I were using a wildcard, file1*.txt. I miss XP!!!!

    Read the article

  • Windows Displays Double the Actual Installed Physical Memory

    - by Andrew Barber
    I have a server I've installed Windows Web 2008 R2 on, which is reporting double the physical memory actually installed. In msinfo32, "Installed Physical Memory" shows as 2x whatever the actual installed amount is, though "Total Physical Memory" shows the correct amount. The "System" info window shows installed memory as 2x, with the correct amount in parentheses listed as the "usable" amount. This server mistakenly had Windows Web 2008 (32-bit) installed on it just previously, and that OS also reported the same faulty information that Win2K8R2 is reporting. The BIOS reports the correct amount, memtest was run on this server before installation, and a previous Windows 2000 instance installed on this system also reported the correct amount, as I recall. Server operation seems to be fine as well (it's only trying to use the correct amount of memory). The server is a generic pizzabox running on a SuperMicro X6DVL-EG with dual Xeon-3.2's. Memory installed is 4 matching mt18vddf12872g-335c3 sticks (1GB PC2700 DDR ECC REG CL2.5). This behavior occurs whether two or all four are installed. So, has anyone seen something like this before? Have any idea about what's causing it, and how concerned I should be about it? Everything else seems good so far, and I'll be upgrading the memory before putting the server into service, but I don't want to spend too much time/money/effort on the server if it's got something odd going wrong here.

    UPDATE: There was a question I ran into regarding memory sparing in the BIOS and a possible (buggy) effect thereof; however, flipping that bit back and forth in the BIOS revealed that isn't the issue. Still flummoxed a bit about this one, though I still have seen no negative impacts.

    Post-Answer Update (January 13, 2011): Upgrading the system with new, larger memory has fixed this issue.

    Read the article

  • Join multiple consecutive SQLite database dump files into 1 common database? Purpose: Search through ENTIRE Chrome Browsing History

    - by porg
    Google Chrome's default web browsing history search engine only lets you access the records of the most recent 100 days. Nevertheless, in your application data, Chrome keeps your entire browsing history in SQLite database files, with the file naming scheme of "History Index YYYY-MM". I am looking for a way to search through my entire browsing history, with sophisticated filters (limit search terms to certain fields such as URL, domain, title, or body text; wildcard or regex terms; date ranges), in either:

    Some ready-made software. eHistory came close, as it can limit terms to fields, but it lacks wildcards/regexes, and has the same limited time horizon as the default search. Beyond that, I could not find any suitable Chrome extension or standalone (Mac) app.

    Or a command line to join multiple SQLite database files into one database, which I can then query (with the full syntax power). In the spirit of the pseudo code below:

        sqlite --targetDatabase ChromeHistoryAll --importFiles /path/to/ChromeAppData/History\ Index* --importOnlyYetUnknownFiles

    Or, if my desired feature --importOnlyYetUnknownFiles is not possible (the feature could also be called "avoid duplicate imports by checking UIDs"), then by explicitly only importing files which I know have not yet been imported into the ChromeHistoryAll database:

        cd ChromeAppData; sqlite --databaseTarget ChromeHistoryAll --importFiles YetNotImported1 YetNotImported2 YetNotImported3

    All my queries I would then perform in the database "ChromeHistoryAll".

    P.S.: An additional question of general interest: is there a way to perform a database query on a temporary database created on-the-fly from multiple files? Like:

        sqlite --query="SQL query" --targetDatabase DbAll --DBtemporaryInRAM --importFiles db1 db2 db3

    This is surely not applicable for my Chrome question, as these History Index files have a combined file size of 500MB, so such a query would perform badly. But it could come in handy in other situations.
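
    The pseudo code maps fairly directly onto SQLite's ATTACH. Here is a minimal Python sketch; the table name "pages" is hypothetical (the real "History Index" schema would have to be inspected first):

        import glob, sqlite3

        dest = sqlite3.connect("ChromeHistoryAll.db")
        for path in sorted(glob.glob("History Index *")):
            dest.execute("ATTACH DATABASE ? AS src", (path,))
            # Clone the schema on the first pass (constraints are not copied)
            dest.execute("CREATE TABLE IF NOT EXISTS pages AS SELECT * FROM src.pages WHERE 0")
            # EXCEPT keeps only rows not already present, so re-importing a file is a no-op
            dest.execute("INSERT INTO pages SELECT * FROM src.pages EXCEPT SELECT * FROM pages")
            dest.commit()
            dest.execute("DETACH DATABASE src")

        # Then query with full SQL, e.g. a field-limited search:
        for row in dest.execute("SELECT * FROM pages WHERE title LIKE ?", ("%regex%",)):
            print(row)

    As for the P.S., the same ATTACH trick works against an in-memory database: open sqlite3.connect(":memory:") and ATTACH each file, at the cost of the performance concern already noted.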

    Read the article
