Search Results

Search found 16467 results on 659 pages for 'request filtering'.

  • Auto login CISCO VPN client on linux [closed]

    - by user70704
    Hi, I have installed the Cisco VPN client on my Linux system (Fedora Core 8). Every time I log in, I need to run the vpnc command to connect to the VPN server, and vpnc prompts me for input: IPSec gateway, IPSec ID, IPSec secret, username, and password. My requirement is: can I connect to the VPN server with a single command? It is tedious to enter all of the above every time, and I would like to connect to the VPN server at boot. I tried an expect script, but I couldn't get it to work. Thanks in advance.
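
    One approach that would fit here (a sketch, not from the original thread): vpnc can read all of these values from a configuration file, so no expect script is needed. Assuming the stock vpnc package, which reads /etc/vpnc/default.conf when invoked without arguments (the values below are placeholders):

        # /etc/vpnc/default.conf -- keep it root-owned, chmod 600
        IPSec gateway vpn.example.com
        IPSec ID mygroup
        IPSec secret mygrouppassword
        Xauth username myuser
        Xauth password mypassword

    With this in place, a plain "vpnc" (for example from an init script) connects without prompting.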

  • Retrieving database column using JSON [migrated]

    - by arokia
    I have a database table consisting of 4 columns (id, symbol, name, contractnumber). All 4 columns and their data are displayed in the user interface using JSON. There is a function which is responsible for adding a new column to the database, e.g. countrycode. The column is added successfully to the database, BUT the newly added column does not show up in the user interface. Below is my code that displays the columns. Can you help me?

    table.php:

        $(document).ready(function () {
            // prepare the data
            var theme = getDemoTheme();
            var source = {
                datatype: "json",
                datafields: [
                    { name: 'id' },
                    { name: 'symbol' },
                    { name: 'name' },
                    { name: 'contractnumber' }
                ],
                url: 'data.php',
                filter: function() {
                    // update the grid and send a request to the server.
                    $("#jqxgrid").jqxGrid('updatebounddata', 'filter');
                },
                cache: false
            };
            var dataAdapter = new $.jqx.dataAdapter(source);
            // initialize jqxGrid
            $("#jqxgrid").jqxGrid({
                source: dataAdapter,
                width: 670,
                theme: theme,
                showfilterrow: true,
                filterable: true,
                columns: [
                    { text: 'id', datafield: 'id', width: 200 },
                    { text: 'symbol', datafield: 'symbol', width: 200 },
                    { text: 'name', datafield: 'name', width: 100 },
                    { text: 'contractnumber', filtertype: 'list', datafield: 'contractnumber' }
                ]
            });
        });

    data.php:

        <?php
        // Include the db.php file
        include('db.php');
        $query = "SELECT * FROM pricelist";
        $result = mysql_query($query) or die("SQL Error 1: " . mysql_error());
        $pricelist = array();
        // get data and store it in an array for json_encode
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $pricelist[] = array(
                'id' => $row['id'],
                'symbol' => $row['symbol'],
                'name' => $row['name'],
                'contractnumber' => $row['contractnumber']
            );
        }
        echo json_encode($pricelist);
        ?>
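
    A likely fix, sketched against the code above (not from the original post): the grid renders only the fields declared in datafields and columns, and data.php emits only the fields it names, so a new column such as countrycode has to be added in all three places:

        // table.php -- declare the new field...
        datafields: [
            { name: 'id' },
            { name: 'symbol' },
            { name: 'name' },
            { name: 'contractnumber' },
            { name: 'countrycode' }
        ],
        // ...and give it a column definition
        columns: [
            { text: 'id', datafield: 'id', width: 200 },
            { text: 'symbol', datafield: 'symbol', width: 200 },
            { text: 'name', datafield: 'name', width: 100 },
            { text: 'contractnumber', filtertype: 'list', datafield: 'contractnumber' },
            { text: 'countrycode', datafield: 'countrycode', width: 100 }
        ]

        // data.php -- include the field in every emitted row
        $pricelist[] = array(
            'id'             => $row['id'],
            'symbol'         => $row['symbol'],
            'name'           => $row['name'],
            'contractnumber' => $row['contractnumber'],
            'countrycode'    => $row['countrycode']
        );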

  • Auto backup a user folder to a usb when usb is plugged in

    - by Azztech Computers
    I'm a computer technician and help customers every day with their computers and smartphones, and I have a really basic (I think) request but don't know how to go about it. Customers always come in with broken phones, water damage, needing updates, or just wanting me to back up their information. I currently have a program I use when backing up their computers; it backs up their iOS folder, C:\Users\USER\AppData\Roaming\Apple Computer\MobileSync\Backup. What I want is a quick, easy way to do this in customers' houses: when I plug in a USB drive, it AUTOMATICALLY searches for the folder and starts transferring it to a predefined folder on the USB drive. That way I can just plug it in and begin work on their computer or phone without the risk of losing their information. I'm sure there is a .bat/.ini file I could use, but I'm wondering if someone has already done this or something similar, as I would need it to search all the USER folders, not just the one I'm logged into. Thanks in advance
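
    A minimal sketch of the .bat approach the poster mentions, assuming the script lives in the root of the USB drive and is started by hand after plugging in (true autorun from removable drives is disabled on recent Windows versions); it walks every profile under C:\Users rather than only the current one:

        @echo off
        rem Copy every user's MobileSync backup folder to this drive.
        rem %~d0 expands to the drive letter the script is running from.
        set DEST=%~d0\MobileSyncBackups
        for /D %%U in ("C:\Users\*") do (
            if exist "%%U\AppData\Roaming\Apple Computer\MobileSync\Backup" (
                robocopy "%%U\AppData\Roaming\Apple Computer\MobileSync\Backup" "%DEST%\%%~nxU" /E /R:1 /W:1
            )
        )

    Run from an elevated prompt so it can read other users' profile folders.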

  • A Web Service to collect data from local servers every hour

    - by anilerduran
    I'm trying to find a way to collect data from different servers around the world. Here are the details: there is only a single PowerShell script on the servers, which encrypts data (a simple CSV file) and sends it using a preferred method (HTTP/HTTPS POST would work). There is no further control over those servers: I can't install any service, process, etc. All I can do is configure the script to execute every hour. The script will also hold an encrypted username/password/license key for each server; it will compress the data and send it to me along with that information. So I need a service in the cloud (I'm not sure if a web service is the right solution) that will: receive the data sent by the servers; authenticate each request to recognize the sender using the license key/username/password; and, most importantly, forward this file to my SQL Server in the cloud (Azure). It should also separate data according to the customer information in the license key, so that every customer's data is stored in dedicated DBs/tables on my SQL Server. All of the processes above should be completed automatically, with no manual steps. Question: is a web service (SOAP or RESTful) the right solution for this?
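
    For the client side described above, a hedged sketch of the hourly upload (the endpoint URL, field names, and JSON shape are illustrative assumptions, not part of the question; requires PowerShell 3.0+ for Invoke-RestMethod):

        # Hypothetical hourly upload from the existing PowerShell script.
        $payload = @{
            licenseKey = $licenseKey
            username   = $username
            data       = [Convert]::ToBase64String(
                             [IO.File]::ReadAllBytes("C:\exports\data.csv.gz"))
        } | ConvertTo-Json

        Invoke-RestMethod -Uri "https://collector.example.com/api/upload" `
                          -Method Post -Body $payload `
                          -ContentType "application/json"

    A RESTful endpoint like this (authenticate, then route to per-customer tables by license key) is generally a better fit than SOAP for a fire-and-forget upload.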

  • Postfix sending mail back to itself? (Ubuntu 9.10)

    - by webo
    I set up Dovecot and Postfix using the "Dovecot-Postfix" package, with SASL and all that. The Dovecot part seems to be working fine, but I'm having issues with Postfix. Whenever I send a message to another address through the Postfix server, two things happen: (1) the message never gets to the other address (even when I request a delivery notification, it says it has been delivered, but it's not in the other inbox, not even in its spam folder); (2) the message comes back to my inbox through Dovecot as though I had sent it to myself internally. E.g., I send an email through my Postfix server to my Gmail account; 10 minutes later nothing shows up in Gmail, but the message comes back to me as though I had sent it to my internal address (with no errors). Any ideas?

  • Rejecting new HTTP requests when server reaches a certain throughput

    - by Sam
    I have a requirement to run an HTTP server that rejects new HTTP requests (with a 503 or similar) when the global transfer rate of current HTTP responses exceeds a certain level. For example, if the web server is transferring at 98 Mbit/s and a new HTTP request arrives, we would want to reject it (as we couldn't guarantee a good speed). I've had a look at mod_cband for Apache, limit_req for nginx, and lighttpd's rate-limiting features, but none of them seem to handle my (rather contrived, granted) use case. I should add that I'm open to using pretty much any web server, and am open to implementing this in iptables rules if someone can craft such a rule! (Refusing the TCP connection is fine; it doesn't have to respond with an HTTP 503.) Any suggestions?
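
    None of the stock rate-limiting modules watch aggregate throughput, but the "craft an iptables rule" idea can be approximated with a small polling loop. A sketch, assuming Linux, a single interface eth0, HTTP on port 80, and an iptables new enough to support -C; the threshold and interval are illustrative:

        #!/bin/sh
        # Poll TX bytes once a second; above ~98 Mbit/s, reject NEW
        # connections to port 80; below it, remove the rule again.
        IFACE=eth0
        LIMIT=12250000   # bytes/sec, roughly 98 Mbit/s
        prev=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
        while sleep 1; do
            cur=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
            rate=$((cur - prev)); prev=$cur
            if [ "$rate" -gt "$LIMIT" ]; then
                iptables -C INPUT -p tcp --dport 80 -m state --state NEW -j REJECT 2>/dev/null \
                    || iptables -I INPUT -p tcp --dport 80 -m state --state NEW -j REJECT
            else
                iptables -D INPUT -p tcp --dport 80 -m state --state NEW -j REJECT 2>/dev/null
            fi
        done

    Crude, but it satisfies the "refusing the TCP connection is fine" relaxation: clients get a connection refused/reset rather than a 503 while the server is saturated.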

  • Deactivate SYN flooding mechanism

    - by mlaug
    I am running a server with a service on port 59380. There are more than 1000 machines out there connecting to that service. Once I restart the service, all those machines reconnect at the same time. That caused some trouble, as I saw this entry in kern.log: "TCP: Possible SYN flooding on port 59380. Sending cookies. Check SNMP counters." So I changed the sysctl net.ipv4.tcp_syncookies to 0, because the endpoints do not handle TCP SYN cookies correctly, and restarted my network to put the change into production. The next time I restarted the service, the following message was logged: "TCP: Possible SYN flooding on port 59380. Dropping request. Check SNMP counters." How can I prevent the system from doing this? All necessary countermeasures are already handled by iptables...
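
    The usual knob here (a hedged suggestion, not from the original post) is the SYN backlog: with syncookies disabled, the kernel starts dropping SYNs once the listen queue overflows, so the backlog must be large enough to absorb the reconnect storm, and the service itself must pass a comparably large backlog to listen():

        # Illustrative values -- size them to the expected connection burst
        sysctl -w net.ipv4.tcp_max_syn_backlog=4096
        sysctl -w net.core.somaxconn=4096

    Note that net.core.somaxconn caps whatever backlog the application requests, so raising the sysctls alone is not enough if the service hardcodes a small listen() backlog.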

  • <authentication mode=“Windows”/>

    - by kareemsaad
    I got this error when browsing a new web site that is a sub of a sub: http://sharp.elarabygroup.com/ha/deault.aspx ("ha" is my new web site):

        Configuration Error
        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.
        Source Error: lines 61-65 of web.config (the default comment block around the <authentication> section and the section that "enables configuration of what to do if/when an unhandled error occurs").

    I note that other sub-sites without their own web.config don't show this. Is this related to web.config? Do all the subs and the domain share one web.config?
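
    For context (a sketch, not from the original exchange): sections registered as allowDefinition='MachineToApplication', such as <authentication>, may only appear in the web.config at an application root. So either mark /ha as an application in IIS, or strip those sections from the sub-folder's web.config. A sub-folder web.config may still carry folder-level settings, for example:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <!-- allowed below application level -->
            <authorization>
              <deny users="?" />
            </authorization>
            <!-- NOT allowed here: <authentication>, <sessionState>, ... -->
          </system.web>
        </configuration>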

  • Configure akamai to ignore favicon errors [on hold]

    - by Aki
    We host our services through Akamai and have configured an alert in Akamai to notify us of 404 errors. We don't want to serve a favicon from our services (they are REST web services not consumed by humans, hence no point in serving favicons). But whenever these web services are accessed from a browser, the browser sends a request for the favicon, which ends up being logged as a 404, and Akamai sends us an alert for it. Is there a way to configure Akamai so that favicon 404s do not contribute to the alert?

  • Ubuntu One - Files API - Cloud - More detailed info somewhere?

    - by Brian McCavour
    I am just starting on a mobile app for Ubuntu One, and I'm reviewing the info at https://one.ubuntu.com/developer/files/store_files/cloud - I find the information a bit lacking, though. It's a nice reference, but as someone not familiar with it, I had to do a Google search to find out what a "volume" is exactly (it's kind of obvious, but it never hurts to know the specifics). There are also things like: "GET /api/file_storage/v1/volumes - Return a JSON list of Volume Representations, one for each volume. A volume is a synced folder, or the Ubuntu One folder, owned by the user. Note that all volume paths begin with ~." ... but there's no such thing as a JSON "list". Does it mean array? And other things... So I was wondering if there exists another page with more detailed information, maybe some sample requests/responses or something? I could just write a little proof-of-concept app to answer some of these questions, but I'd prefer not to unless I have to. Thanks

  • How can I deal with a difficult developer that is holding back the project? [migrated]

    - by ILovePaperTowels
    Our entire project is being held up because of one piece being handled by a single developer. When we finally got the latest version of his code and started reviewing it, we found the code was horrendous! It's a relatively simple workflow, but the code is so complex that it's very difficult to step through and review/debug. The developer responsible has a hard time accepting any kind of criticism and feels he is more knowledgeable than other members of the team. It's difficult to even talk to him about his development work, because it turns into an "I know what I'm talking about and you're just wrong!" type of conversation. A request has already been put in to replace this developer, but management is not doing anything, probably because devs are in short supply where we are and this corporation has a lot of office drama. I'm just one of the developers, not the project manager, but I really want to see this project succeed. What can I do in this sort of situation to try to keep the project on track?

  • R12 Diagnostic Script for Purchasing Encumbrance Issues

    - by Oracle_EBS
    Do you have a Release 12 Purchasing document with an accounting encumbrance error? Get all the relevant data in one step using the new diagnostic in Doc ID 1483743.1, 'R12: Diagnostic Script to help troubleshoot Purchasing Encumbrance Issues', and avoid the back-and-forth with support for data collection. Query the document ID in My Oracle Support and add it to your Favorites using the star icon for quick access. The note explains when and how to use the script. The script produces a user-friendly HTML output containing information relevant to encumbrance issues, along with data-validation checks that identify common data-corruption issues in your document. This one diagnostic provides information on: Cross Product Setup; Document Data Dump; Funds Availability; Subledger Accounting Information; GL and AP Invoice Data; Debug and Trace. The output is ideal for self-service, as the Data Validation section lists known issues (related to the document) with links to key documentation; alternatively, the report can be uploaded to support when logging a Service Request. To see more about the diagnostic, attend our September 11, 2012 webcast 'Overview of Procurement Patching and New Tools for Issue Resolution'; visit Doc ID 1479718.1 to sign up. Note: this topic is not yet listed, as it has just been added.

  • Unity, libgdx, or something else to develop my first game for Android?

    - by capcom
    I want to start by saying that I absolutely love Unity (even more when I team it up with Blender). I really want to start developing games for Android, but it seems like Unity poses too many roadblocks in terms of which devices it supports (and even when it does support them, it doesn't work well on all of them). I've been looking around for alternatives and found something called libgdx. It's nothing like Unity, unfortunately, but at least it seems I may be able to reach a larger audience on the market. I'd like to start by making 2D games, but with 3D graphics (say, imported from Blender). I can do this very easily in Unity, and it seems like it should be fine with libgdx too. But I really want to know whether ditching Unity is a smart idea, considering how comfortable I am with it already and how much I like it. Finally, is libgdx something you would recommend given my requirements/situation? BTW, I am quite familiar with Eclipse too. Many thanks. Feel free to request further details.

  • When using ssh with priv/pub keys, how do you connect to the destination as a different user than on the origin machine?

    - by lpacheco
    I need to connect to hostB as user2 from hostA, where I'm connected as user1. I've run ssh-keygen -t rsa on hostA and copied the public key generated in ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys of user2 on hostB. Then I tried to connect from hostA to hostB using the command:

        user1@hostA> ssh user2@hostB

    I still get a request for a password:

        user2@hostB's password:

    If I try to connect using the same user on both hosts, it works correctly:

        user1@hostA> ssh user1@hostB
        Enter passphrase for key '/home/user1/.ssh/id_rsa':

    What am I missing?
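
    The usual culprit in this situation (a hedged checklist, not from the original exchange) is file permissions on the target account: sshd silently ignores an authorized_keys file that is group- or world-writable, or that sits in a writable home or .ssh directory. On hostB, as user2:

        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    Running ssh -v user2@hostB from hostA shows whether the key is offered and rejected; the sshd log on hostB (e.g. its auth log) shows why it was refused.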

  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple web service and want to use HMAC for authentication to the service. For the purpose of this question we have: a web service at example.com; a secret key shared between a user and the server [K]; a consumer ID which is known to the user and the server, but not necessarily secret [D]; and a message which we wish to send to the server [M]. The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters that make it into various file formats. This is of course bad implementation on the client side (100%), but I would like this web service to be easily accessible and not have trouble with different file formats. I was thinking of an alternative which would allow the use of a short (5-10 char) random string [R] rather than the message for authentication, e.g. H = HMAC(K, R). The user then passes the random string to the server, and the server checks the HMAC server-side (using the random string + shared secret). As far as I can see, this produces the following issues: There is no message integrity; this is OK, as message integrity is not important for this service. A user could reuse the hash with a different message; I can see two ways around this: combine the random string with a timestamp so the hash is only valid for a set period of time, or only allow each random string to be used once. Since the client is in control of the random string, it is easier to look for collisions. I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable second best?
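
    A sketch of the proposed token, in PHP for concreteness (the separator, the replay window, and the role of the consumer ID are assumptions beyond what the question specifies):

        <?php
        // Client side: bind the random nonce [R] to a timestamp.
        $k = $sharedSecret;                             // [K]
        $r = bin2hex(openssl_random_pseudo_bytes(5));   // [R], 10 hex chars
        $t = time();                                    // replay bound
        $h = hash_hmac('sha256', $r . '|' . $t, $k);    // H = HMAC(K, R|T)
        // The request carries: consumer ID [D], $r, $t, $h.

        // Server side: look up K by consumer ID [D], then
        //   1. reject if abs(time() - $t) exceeds the window (e.g. 300s);
        //   2. reject if ($r, $t) has been seen before (replay);
        //   3. recompute the HMAC and compare in constant time
        //      (hash_equals() in PHP 5.6+).

    Combining both countermeasures from the question (timestamp window plus nonce uniqueness within that window) keeps the replay cache small, which matters if the goal is cheap rate limiting rather than message integrity.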

  • apache - subdomains are slower?

    - by matthewsteiner
    Using ApacheBench, I ran the exact same PHP application on my root domain and on a subdomain. Even across multiple tests with high request counts, the requests-per-second figures differ wildly. I mean something like this: example.com: 1200 requests per second; bench.example.com: 50 requests per second. What could be affecting this? Neither uses a database or anything; they mainly just display a simple page. It's the same app on both, and I'm wondering why they perform so differently. Ideas?

  • Wireshark Not Displaying Packets From Other Network Devices, Even in Promisc Mode

    - by eb80
    System setup:

    1. MacBook running Mountain Lion.
    2. Wireshark installed and capturing packets (I have "capture all in promiscuous mode" checked).
    3. I filter out all packets with my own source and destination IP using the following filter: ip.dst != 192.168.1.104 && ip.src != 192.168.1.104
    4. On the same network as the MacBook, I use an Android device (connecting via WiFi) to make HTTP requests.

    Expected results: Wireshark running on the MacBook sees the HTTP requests from the Android device.

    Actual results: I only see SSDP broadcasts from 192.168.1.1.

    Question: What do I need to do so that Wireshark, like Firesheep, can see and use the packets (particularly HTTP) from other network devices on the same network?
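
    On WiFi, promiscuous mode alone isn't enough: the access point only delivers frames addressed to the capturing station, so seeing other hosts' unicast traffic requires 802.11 monitor mode (plus the network's passphrase for decryption, if it is encrypted). A sketch on OS X, where the WiFi interface name en0 is an assumption:

        # Capture in 802.11 monitor mode (note the capital -I) and save to file
        sudo tcpdump -I -i en0 -w capture.pcap

    Open capture.pcap in Wireshark afterwards; for a WPA network, add the passphrase under the IEEE 802.11 protocol preferences so the payloads can be decrypted.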

  • How far should one take e-mail address validation?

    - by Mike Tomasello
    I'm wondering how far people should take the validation of e-mail addresses. My field is primarily web development, but this applies anywhere. I've seen a few approaches: simply checking for an "@", which is dead simple but of course not that reliable; a more complex regex test for standard e-mail formats; a full regex against RFC 2822, the problem being that an address can be valid yet probably not what the user meant; DNS validation; and SMTP validation. As many people know (but many don't), e-mail addresses can have a lot of strange variations that most people never consider (see RFC 2822 3.4.1), but you have to think about the goals of your validation: are you simply trying to ensure that mail can be sent to an address, or that it is what the user probably meant to type (which is unlikely in a lot of the more obscure cases of otherwise 'valid' addresses)? An option I've considered is giving a warning for a more esoteric address but still allowing the request to go through, but this adds complexity to a form and most users are likely to be confused. While DNS validation / SMTP validation seem like no-brainers, I foresee problems where the DNS or SMTP server is temporarily down and a user is unable to register somewhere, or the user's SMTP server doesn't support the required features. How might some experienced developers out here handle this? Are there any other approaches than the ones I've listed? Edit: I completely forgot the most obvious of all: sending a confirmation e-mail! Thanks to answerers for pointing that one out. Yes, this one is pretty foolproof, but it does require extra hassle on the part of everyone involved: the user has to fetch the e-mail, and the developer needs to hold user data before it is even confirmed as valid.
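
    A pragmatic middle ground, sketched in PHP (one option among those listed above, not the definitive answer): a cheap syntactic check up front to catch typos, with the confirmation e-mail as the real test:

        <?php
        // First pass: catches obvious typos without rejecting odd-but-valid forms.
        if (filter_var($address, FILTER_VALIDATE_EMAIL) === false) {
            // ask the user to re-check the address
        } else {
            // store the account as unconfirmed and send a confirmation link;
            // successful delivery is the only check that proves the mailbox exists
        }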

  • Google is displaying "Translate this page" based on a previously registered domain inbound links

    - by crnm
    I recently started a new project on a newly registered generic-TLD domain. As soon as Google started indexing the page, it displayed a "Translate this page" link in SERPs, offering to translate the page into the language of a small Eastern European country from the language the site actually uses. I tried everything to prevent this: language meta headers and attributes, localisation through Google Webmaster Tools... all to no avail; nothing helped. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools, all coming from that small Eastern European country, from sub-pages that are no longer active (either returning 404s or 301s to the main page) and that had been written in that other language. So the domain had been registered before, and by the looks of it, it picked up a lot of possibly spammy links in that language. I can't even ask the linking sites to remove the links, as the pages aren't physically active anymore, just present in Google Webmaster Tools and/or internal data... Now I'm at a loss about what to do. As my site is pretty new, it does not have many links pointing to it in my targeted language, so those are probably not enough to convince Google to attach the right language, and Google ignores all other signals about the page language. I'm also unsure whether I should use the "disavow" tool, or a reconsideration request... or what else to do about this miserable state. I've never used these tools before, so I don't have any experience with them. Somehow I have to convince Google of the right language of the page, and also not to count all those historical links from the previous owner. (The domain had been deleted without any traces in Google before I registered it.) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: "How can I prevent Google mistakenly offering to translate a page?", but didn't find a solution there.)

  • PHP script unable to email on OpenBSD Apache

    - by MattC
    I have a web server running OpenBSD 4.7 and PHP 5.2.12 from the ports tree. There is a small contact page that is supposed to send an email to a specific address. When I fill in the form in a web browser, it sends an AJAX request to the PHP page, which claims it worked, but no email arrives and the maillog is empty. I created a small PHP script that replicates this functionality, and when I run it by hand with "php -f", it sends an email without a problem. I think this has to do with Apache being chrooted, but I can't seem to get it to work. Furthermore, I can't get PHP to log: I told it to log to /var/www/logs/php_errors.log and restarted, but nothing is written to the file. Does anyone have tips for debugging this sort of thing on OpenBSD?
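
    The symptom fits the chroot theory (a hedged sketch, not a confirmed answer): Apache on OpenBSD runs chrooted to /var/www by default, so PHP's mail() cannot exec /usr/sbin/sendmail, which lives outside the chroot. A common workaround is to install a small sendmail-compatible mailer inside the chroot, e.g. the femail-chroot package from ports, and point php.ini at it (the exact path depends on where the package installs inside /var/www):

        ; php.ini -- assumes a sendmail-compatible binary inside the chroot
        sendmail_path = /bin/femail -t

    The silent error log has the same cause: paths in php.ini are resolved inside the chroot, so /var/www/logs/php_errors.log must be configured as error_log = /logs/php_errors.log.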

  • When can an FTP server close its passive connections?

    - by Don Kirkby
    Does the FTP protocol allow the server to close any of its passive connections while the client is still connected? Can it tell when the client has finished receiving and then close the connection? I'm including an FTP server in my application using the pyftpdlib Python project. I've got it working in active and passive mode, but I'm a bit concerned about when it closes its passive connections. I've tried connecting to it with both FileZilla and the default ftp command on Ubuntu, and in both cases I get a new passive port for every request. That is, if I sit in the root folder and type ls 10 times, I use up 10 ports. This means I have to allocate a big block of passive ports for the FTP server to use so it won't run out. As soon as the client disconnects, the server releases all the passive connections associated with that client and those ports can be reused; however, a long-running connection could use up a lot of ports.
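
    On the protocol side: under RFC 959, each data connection serves exactly one transfer (one LIST, one RETR, ...), and the server closes it when that transfer completes, so a port is only tied up for the duration of the transfer (plus TIME_WAIT), not for the life of the control connection. On the pyftpdlib side, the passive pool can be bounded explicitly; a sketch against the older ftpserver module (newer releases moved FTPHandler to pyftpdlib.handlers):

        # Restrict passive-mode data connections to a fixed port range.
        from pyftpdlib import ftpserver

        handler = ftpserver.FTPHandler
        handler.passive_ports = range(60000, 60100)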

  • Is it possible to extend a 504 timeout in nginx on a per location basis

    - by codecowboy
    Is it possible to set timeout directives within a location block to prevent nginx returning a 504 from a long-running PHP script (PHP-FPM)?

        location /myurlsegment/ {
            client_body_timeout 1000000;
            send_timeout 1000000;
            fastcgi_read_timeout 1000000;
        }

    This has no effect when making a request to example.com/myurlsegment: the timeout occurs after approximately 60 seconds. PHP is configured to allow the script to run until completion (set_time_limit(0)). I don't want to set a global timeout for all scripts.
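
    One likely explanation (a sketch, assuming a typical setup where PHP is handled by a separate "location ~ \.php$" block): fastcgi_read_timeout only takes effect in the location that actually performs the fastcgi_pass, and a regex PHP location wins over the /myurlsegment/ prefix location, so the directives above are never consulted. A per-URL override would then look like this (the socket path and params file are assumptions):

        # Matches PHP scripts under /myurlsegment/ before the generic PHP block
        location ~ ^/myurlsegment/.+\.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php-fpm.sock;
            fastcgi_read_timeout 1000000;   # overrides the 60s default here only
        }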

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm; the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do; I assume about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern at the moment is performance. My questions: I assume that for each request, the rewrite block will be parsed and the regexes evaluated. Am I right? Will there be a performance penalty if I use these rewrites? Can nginx handle this? Are there any best practices to follow when doing a lot of rewrites?
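
    Since the old-to-new pairs are fixed strings, one widely used pattern (a sketch; all paths except the one from the question are made up) is a map lookup instead of ~500 sequential rewrite regexes: the map is built once at configuration load and evaluated as a single hash lookup per request.

        # In the http{} block: one lookup instead of ~500 regex evaluations
        map $uri $new_uri {
            default                 "";
            /olddir/part1_de.htm    /newdir/sub/category/anotherpage.htm;
            # ... the remaining entries, one per line ...
        }

        # In the server{} block: redirect only when the map produced a target
        if ($new_uri) {
            return 301 $new_uri;
        }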

  • Is it possible to turn a shortcut to a folder into a menu of items in that folder?

    - by MrVimes
    I vaguely remember seeing something on the internet showing that it is possible to turn any shortcut to a folder into a menu, as long as the shortcut is in the Start menu or on the Quick Launch bar (i.e. somewhere that allows menu functionality). Does anyone know if this is possible, and if so, how to do it? I'd like to be able to do this with a link in my Quick Launch area. I remember it had something to do with renaming the shortcut with a long string of characters placed between '{' and '}'. I realize how picky this request is, as I have more or less achieved what I'm looking for by placing the 'Desktop' toolbar on my taskbar, but I'd rather it be an icon in my Quick Launch. Just humour me :)

  • apache domain names are case sensitive

    - by neubert
    The following HTTP request results in a "See the error log for more details; Invalid Value Found For Domain" error:

        GET / HTTP/1.0
        Host: www.MyWebsite.com

    If I make the hostname all lowercase, however, it works just fine. How can I make Apache case-insensitive? Here's my httpd.conf:

        <VirtualHost *:80>
            ServerName mywebsite.com
            ServerAlias www.mywebsite.com
            ...
        </VirtualHost>

    I tried adding ServerAlias www.MyWebsite.com, but that didn't help. In any event, that seems like a poor approach anyway, since the case can be mixed in a ton of different ways, and trying to account for all of them would result in a huge *.conf file. Any ideas? Thanks!
