Search Results

Search found 9845 results on 394 pages for 'ntp servers'.


  • moving audio over a local network using GStreamer

    - by James Turner
    I need to move realtime audio between two Linux machines, which are both running custom software (of mine) which builds on top of GStreamer. (The software already has other communication between the machines, over a separate TCP-based protocol - I mention this in case having reliable out-of-band data makes a difference to the solution.) The audio input will be a microphone / line-in on the sending machine, and normal audio output as the sink on the destination; alsasrc and alsasink are the most likely, though for testing I have been using audiotestsrc instead of a real microphone. GStreamer offers a multitude of ways to move data around over networks - RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There are also many examples on the web of streaming both audio and video - but none of them seem to work for me in practice; either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available. In all cases, I'm testing on the command line with just gst-launch. No compression of the audio data is required - raw audio, or trivial WAV, uLaw or aLaw encoding is fine; what's more important is low-ish latency.
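
    For what it's worth, here is a minimal sender/receiver pair as a sketch of the RTP-over-UDP route, which has worked for low-latency raw audio in comparable setups. The host, port and test source are assumptions; swap audiotestsrc for alsasrc and adjust the caps to the hardware:

        # sender (placeholder host/port; uLaw keeps the payload trivial)
        gst-launch audiotestsrc ! audioconvert ! audioresample \
            ! mulawenc ! rtppcmupay ! udpsink host=192.168.0.2 port=5000

        # receiver: spelling out the caps on udpsrc is usually what makes
        # or breaks negotiation, since raw UDP carries no caps of its own
        gst-launch udpsrc port=5000 \
            caps="application/x-rtp,media=audio,clock-rate=8000,encoding-name=PCMU" \
            ! rtppcmudepay ! mulawdec ! audioconvert ! alsasink sync=false

    Since the machines already share a reliable TCP channel, that channel could carry the receiver's caps string, which sidesteps the negotiation failures described above.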

    Read the article

  • BITS, TakeOwnership, and Kerberos / Windows Integrated Authentication

    - by Charlie Flowers
    We're using BITS to upload files from machines in our retail locations to our servers. BITS will stop transferring a file if the user who owns the BITS job logs off, so we're using a Windows service running as LocalSystem to submit the jobs to BITS and be the job owner. This allows transfers to continue 24/7. However, it raises a question about authentication. We want the BITS server extensions in IIS to use Kerberos to authenticate the client machine. As far as I can tell, that leaves us with only two options, neither of which is ideal: either we create an "ImageUploader" account and store its username/password in a config file that the Windows service uses as credentials for the BITS job, or we ask the logged-on user who creates the BITS job for his password, and then use his credentials for the BITS job. I guess the third option is not to use Kerberos, and maybe go with Basic Auth plus SSL. I'm sure I'm wrong and there's a better option. Is there? Thanks in advance.

    Read the article

  • Perl Script in background

    - by himz
    I have a Perl script I made to automatically telnet into different servers, but its interface is command-line only. To make it more user-friendly for general Windows users, I need to make a GUI for it. My idea is to make the GUI in a language like VB or Java and have it call the Perl script; the script runs in the background in a command prompt, and whatever it outputs is displayed back in the GUI. I got some success: with the GUI in VB, I run an instance of cmd in the background and run the Perl script in that. But that is where the program fails. As the Perl script runs in its own process, I only get the output when the script completes (or rather, when it times out). I need a mechanism where I can interact with the Perl script: take output from the script and show it to the user, then take input from the user, and so on. Please can you suggest some way to actually make this happen. PS: There is no limitation on the language used for the GUI (the core work is done by the Perl script; the GUI is only there to give appropriate commands to the script). Thanks in advance.
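
    As a sketch of the usual culprit and fix: Perl buffers STDOUT when it is not attached to a terminal, so a GUI reading the script's output sees nothing until the process exits. Enabling autoflush in the script and reading it through a pipe line by line gives the interactive behavior described above (shown in Perl for brevity; VB/Java process-stream APIs work the same way). The update_gui hook, the progress message and the script name are hypothetical:

        # in the telnet script: flush STDOUT after every print
        $| = 1;
        print "STATUS: connected\n";   # hypothetical progress message

        # on the reading side: consume output as it is produced
        open(my $out, '-|', 'perl telnet_script.pl')
            or die "cannot start script: $!";
        while (my $line = <$out>) {
            chomp $line;
            update_gui($line);   # hypothetical hook pushing text to the GUI
        }
        close $out;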

    Read the article

  • php json jquery and select box

    - by user253530
    I have this PHP code:

        $jsonArray = array();
        $sql = "SELECT ID, CLIENT FROM PLD_SERVERS";
        $result = mysql_query($sql);
        while ($row = mysql_fetch_array($result)) {
            $jsonArray[] = array('id' => $row['ID'], 'client' => $row['CLIENT']);
        }
        echo json_encode($jsonArray);

    And this JS:

        function autosearchLoadServers() {
            $.post("php/autosearch-load-servers.php", function(data){
                var toAppend = "";
                for (var i = 0; i < data.length; i++) {
                    toAppend += '<option value="' + data[i].id + '">' + data[i].client + '</option>';
                }
                $("#serverSelect").empty();
                $("#serverSelect").html(toAppend);
            });
        }

    The problem is that I get only undefined values. How can this be? The values are in the JSON (I checked using Firebug in Mozilla), so there has to be something wrong with the data variable, but I can't understand what. I tried different ways and got no results.
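
    A likely explanation, for what it's worth: $.post() treats the response as plain text unless told otherwise, so data is a string and data[i].id is undefined. A sketch of the fix is to pass "json" as the dataType (alternatively, the PHP side can send a Content-Type: application/json header so jQuery can infer it):

        function autosearchLoadServers() {
            $.post("php/autosearch-load-servers.php", function (data) {
                var toAppend = "";
                for (var i = 0; i < data.length; i++) {
                    toAppend += '<option value="' + data[i].id + '">'
                              + data[i].client + '</option>';
                }
                $("#serverSelect").html(toAppend);
            }, "json");   // tell jQuery to parse the response as JSON
        }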

    Read the article

  • Where are the real risks in network security?

    - by Barry Brown
    Anytime username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc.). But that leaves the end points potentially vulnerable. Realistically, which is at greater risk of intrusion?

    - Transport layer: compromised via wireless packet sniffing, malicious wiretapping, etc.
    - Transport devices: risks include ISPs and Internet backbone operators sniffing data.
    - End-user device: vulnerable to spyware, key loggers, shoulder surfing, and so forth.
    - Remote server: many uncontrollable vulnerabilities, including malicious operators, break-ins resulting in stolen data, physically heisting servers, backups kept in insecure places, and much more.

    My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) at low to non-existent. But what security does the server I'm connected to have? Has it been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks. What do you think?

    Read the article

  • Elmah sends error mail on development server, but not on production

    - by Adrian Grigore
    Hi, I am trying to set up Elmah so that it sends me an e-mail when a new error occurs. This works fine on my development server, but on the production server no e-mail is sent. The exception is logged on the production server; it's just the e-mail that does not get sent. Here are my Elmah configuration settings:

        <elmah>
          <security allowRemoteAccess="yes"/>
          <errorMail from="<MYGOOGLELOGIN>@googlemail.com"
                     to="<MYGOOGLELOGIN>@googlemail.com"
                     subject="ERROR From Elmah"
                     async="false"
                     smtpPort="587"
                     useSsl="true"
                     smtpServer="smtp.gmail.com"
                     userName="<MYGOOGLELOGIN>@googlemail.com"
                     password="<MYGOOGLEPASSWORD>" />
        </elmah>

    I've tried different mail servers, both local and remote, and I tried both synchronous and asynchronous mail sending, but to no avail. Now I don't have the slightest idea how to proceed (apart from debugging Elmah on my production server, which seems like a lot of effort to set up). Please help! Thanks, Adrian

    Edit: I might also add that I tried switching off the firewall on the production server, but that did not make any difference either.
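
    One way to narrow this down is to take Elmah out of the picture and test raw SMTP connectivity from the production box. A minimal smoke test, assuming standard System.Net.Mail (the credentials are placeholders): if this throws, the problem is the server's outbound connectivity rather than the Elmah config. Hosting environments commonly block outbound ports 25/587 even with the local firewall off, which would match these symptoms.

        // minimal SMTP smoke test, independent of Elmah
        using System.Net;
        using System.Net.Mail;

        class SmtpSmokeTest
        {
            static void Main()
            {
                var client = new SmtpClient("smtp.gmail.com", 587);
                client.EnableSsl = true;
                client.Credentials = new NetworkCredential(
                    "you@googlemail.com", "password");   // placeholder credentials
                client.Send("you@googlemail.com", "you@googlemail.com",
                            "Elmah SMTP test", "If this arrives, SMTP works.");
            }
        }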

    Read the article

  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup: The statements are all trivially simple. Most are in the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1. The most complex statement in the lot is simply three such selects joined together with UNION ALLs. Each page hit executes at most one statement and executes it only once. I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally. Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it? EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted" — lots of different perspectives. Ultimately, though, I have to give rick his due… without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!
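
    For reference, a sketch of the emulated-prepares setup the accepted answer points at: PDO keeps the parameter binding and quoting (so the injection defense stays), but the MySQL driver interpolates values client-side, making each query a single round-trip. The DSN and $quux are placeholders:

        // emulated prepares: parametrization without the prepare round-trip
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', $user, $pass);
        $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, true);

        $stmt = $pdo->prepare(
            'SELECT foo, bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1');
        $stmt->execute(array($quux));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);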

    Read the article

  • sp_addlinkedserver on sql server 2005 giving problem

    - by Jit
    I am trying to create a linked server to a remote database (both servers are SQL Server 2005). I am able to connect to that remote server from my SQL Server Management Studio. I used the following syntax to create it:

        EXEC sp_addlinkedserver
            @server = N'LINKSQL2005',
            @srvproduct = N'',
            @provider = N'SQLNCLI',
            @provstr = N'SERVER=<IP address of remote server>;User ID=XXXXXX;Password=***'

    I have provided the IP address, user name and password in the above syntax. The linked server is getting created, but when I try to execute a query on it I get the error below.

    Query used:

        select * from LINKSQL2005.<DBName>.dbo.<TableName>

    Error:

        OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Communication link failure".
        Msg 10054, Level 16, State 1, Line 0
        TCP Provider: An existing connection was forcibly closed by the remote host.
        Msg 18456, Level 14, State 1, Line 0
        Login failed for user 'sa'.
        OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Invalid connection string attribute".

    Please help me; where am I making a mistake?
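
    One thing worth trying: SQLNCLI often does not honor credentials embedded in @provstr, and the "Login failed for user 'sa'" plus "Invalid connection string attribute" pair suggests the login mapping is falling through to a default. The supported route is @datasrc plus an explicit mapping via sp_addlinkedsrvlogin; a sketch with a placeholder address and the same placeholder credentials:

        EXEC sp_addlinkedserver
            @server = N'LINKSQL2005',
            @srvproduct = N'',
            @provider = N'SQLNCLI',
            @datasrc = N'10.0.0.1';          -- placeholder remote address

        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'LINKSQL2005',
            @useself = N'FALSE',
            @locallogin = NULL,              -- map all local logins
            @rmtuser = N'XXXXXX',
            @rmtpassword = N'***';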

    Read the article

  • WMI Query Script as a Job

    - by Kenneth
    I have two scripts. One calls the other with a list of servers as parameters; the second script is designed to execute a WMI query. When I run it manually, it does this perfectly. When I try to run it as a job it hangs forever and I have to remove it. For the sake of space, here is the relevant part of the calling script, ProcessServers.ps1:

        Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList $sqlsrv,$destdb,$server,$instance

    And GetServerDetailsLight.ps1:

        param($sqlsrv,$destdb,$server,$instance)

        $password = Get-Content C:\SQLPS\auth.txt | ConvertTo-SecureString
        $credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "DOMAIN\MYUSER",$password
        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
        $box_id = 0

        if ($sqlsrv.Length -eq 0) {
            Write-Output "No data passed"
            break
        }

        function getinfo {
            param(
                [string]$svr,
                [string]$inst
            )
            "Entered GetInfo with: $svr,$inst"
            $cs = Get-WmiObject win32_operatingsystem -ComputerName $svr -Credential $credentials -Authentication 6 -Verbose -Debug |
                  Select-Object Name, Model, Manufacturer, Description, DNSHostName, Domain, DomainRole, PartOfDomain,
                      NumberOfProcessors, SystemType, TotalPhysicalMemory, UserName, Workgroup
            Write-Output "WMI Results: $cs"
        }

        getinfo $server $instance
        Write-Output "Complete"

    Executed as a job it will show as 'running' forever:

        PS C:\sqlps> Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList DBSERVER,LOGDB,SERVER01,SERVER01

        Id    Name    State      HasMoreData    Location     Command
        --    ----    -----      -----------    --------     -------
        21    Job21   Running    True           localhost    param($sqlsrv,$destdb,...

        GAC    Version       Location
        ---    -------       --------
        True   v2.0.50727    C:\WINDOWS\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll
        getinfo MSDCHR01 MSDCHR01
        Entered GetInfo with: SERVER01,SERVER01

    The last output I ever get is 'Entered GetInfo with: SERVER01,SERVER01'. If I run it manually like so, the WMI query executes just as expected:

        PS C:\sqlps> .\GetServerDetailsLight.ps1 DBSERVER LOGDB SERVER01 SERVER01

    I am trying to determine why this happens, or at least find a useful way to trap errors from within jobs. Thanks!
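
    A guess worth testing before anything else: in PowerShell v2 the -Debug switch effectively sets $DebugPreference to Inquire for that call, which makes Get-WmiObject stop and ask for confirmation. A background job has no way to answer an interactive prompt, which would produce exactly this indefinite hang. A sketch of the WMI call with the switches dropped:

        # inside getinfo: -Verbose -Debug removed, since -Debug can trigger
        # a confirmation prompt that a background job can never answer
        $cs = Get-WmiObject win32_operatingsystem -ComputerName $svr `
                -Credential $credentials -Authentication 6 |
              Select-Object Name, Model, Manufacturer, Description, DNSHostName,
                  Domain, DomainRole, PartOfDomain, NumberOfProcessors,
                  SystemType, TotalPhysicalMemory, UserName, Workgroup

    For trapping errors generally, Receive-Job <id> -Keep surfaces whatever output and errors a job has produced so far, which helps when a job wedges like this.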

    Read the article

  • Applets failing to load

    - by Roy Tang
    While testing our setup for user acceptance testing, we got some reports that Java applets in our web application would occasionally fail to load. The environment where it was reported was WinXP/IE6, and there were no errors found in the Java console. Obviously we'd like to avoid this. What sort of things should we be checking for here? On our local servers, everything seems fine. There's some turnaround time when sending questions to the on-site guy, so I'd like to cover as many possible causes as I can. Some more info: we have multiple applets, and in the instances where they fail to load, all of them fail to load. The applet jar files vary in size from 2 MB to 8 MB. I'm told it seems more likely to happen if the applet isn't cached yet, i.e. if they've been able to load the applets once on a given machine, further runs on that machine go smoothly. I'm wondering if there's some sort of network transfer error when downloading the applets, but I don't know how to verify that. Any advice is welcome!

    Read the article

  • .Net website create directory to remote server access denied

    - by tmfkmoney
    I have a web application that creates directories. The application works fine when creating a directory on the web server; however, it does not work when it tries to create a directory on our remote file server. The file server and the web server are in the same domain. I have created a user in our domain, "DOMAIN\aspnet"; the user is present on both servers. I am running my .NET app pool under the domain user. I have also tried using Windows impersonation in the web.config to run under the domain user. I have verified that the domain user has full control of the remote directory. In an effort to debug this, I have also given "Everyone" full control of the remote directory and added the domain user to the Administrators group. I have a simple .NET test page on the web server to test this. Through the test page I am able to read the directory on the file server and get a list of everything in it. I am not able to upload files or to create directories on the file server.

    Here's code that works:

        var path = @"\\fileserver\images\";
        var di = new DirectoryInfo(path);
        foreach (var d in di.GetDirectories())
        {
            Response.Write(d.Name);
        }

    Here's code that doesn't work:

        path = Path.Combine(path, "NewDirectory");
        Directory.CreateDirectory(path);

    Here's the error I'm getting:

        Access to the path '\\fileserver\images\NewDirectory' is denied.

    I'm pretty stuck on this. Any ideas?

    Read the article

  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using JAX-WS 2.2 RI (Metro 2.0); 2.1 exhibited the same problem. I am running into trouble with Exchange, which returns "HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'." as a response (2.1 quoted the charset value, but otherwise gave the same response). Apparently I need to dictate the exact Content-Type header for Exchange to be happy. Is there a way for me to do this without being forced to manually rebuild the dependency? I currently rely on published Maven artifacts, and would like to continue doing so if at all possible. The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the application's scope, but cannot add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the JAX-WS RI implementation, not the generated code. The resulting HTTP transport is thus handled by the standard HTTP/HTTPS client from Sun JRE5 or JRE6.
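
    I can't offer a definitive fix, but one low-effort experiment, assuming the Metro transport honors headers placed in the request context, is to pin the exact Content-Type Exchange expects (with the space) before invoking the port. Whether the RI's encoder allows this override is an open question; if not, a custom SOAP protocol handler doing the same rewrite would be the fallback. The class and method names here are hypothetical, and 'port' is the generated EWS port object:

        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import javax.xml.ws.BindingProvider;
        import javax.xml.ws.handler.MessageContext;

        public class ContentTypeWorkaround {
            // hedged sketch: ask the transport to send the header Exchange wants
            public static void apply(Object port) {
                Map<String, List<String>> headers = new HashMap<String, List<String>>();
                headers.put("Content-Type",
                            Collections.singletonList("text/xml; charset=utf-8"));
                ((BindingProvider) port).getRequestContext()
                        .put(MessageContext.HTTP_REQUEST_HEADERS, headers);
            }
        }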

    Read the article

  • How to terminate a particular Azure worker role instance

    - by Oliver Bock
    Background: I am trying to work out the best structure for an Azure application. Each of my worker roles will spin up multiple long-running jobs. Over time I can transfer jobs from one instance to another by switching them to a read-only mode on the source instance, spinning them up on the target instance, and then spinning the originals down on the source instance. If I have too many jobs then I can tell Azure to spin up extra role instances and use them for new jobs. Conversely, if my load drops (e.g. during the night) then I can consolidate outstanding jobs onto a few machines and tell Azure to give me fewer instances. The trouble is that (as I understand it) Azure provides no mechanism to let me decide which instance to stop. Thus I cannot know which servers to consolidate onto, and some of my jobs will die when their instance stops, causing delays for users while I restart those jobs on surviving instances.

    Idea 1: I decide which instance to stop, and return from its Run(). I then tell Azure to reduce my instance count by one, and hope it concludes that the broken instance is a good candidate. Has anyone tried anything like this?

    Idea 2: I predefine a whole bunch of different worker roles with identical contents. I can individually stop and start them by switching their instance count from zero to one and back again. I think this idea would work, but I don't like it because it seems to go against the natural Azure way of doing things, and because it involves a lot of extra bookkeeping to manage the extra worker roles.

    Idea 3: Live with it.

    Any better ideas?

    Read the article

  • Modify shell script to monitor/ping multiple ip addresses

    - by Alex
    Alright, so I need to constantly monitor multiple routers and computers to make sure they remain online. I have found a great script here that will notify me via Growl (so I can get instant notifications on my phone) if a single IP cannot be pinged. I have been attempting to modify the script to ping multiple addresses, with little luck. I'm having trouble figuring out how to ping a down server while the script keeps watching the online servers. Any help would be greatly appreciated. I haven't done much shell scripting, so this is quite new to me. Thanks!

        #!/bin/sh
        #Growl my Router alive!
        #2010 by zionthelion73 [at] gmail . com
        #use it for free
        #redistribute or modify but keep these comments
        #not for commercial purposes
        iconpath="/path/to/router/icon/file/internet.png"
        # path must be absolute or in "./path" form but relative to growlnotify position
        # document icon is used, not document content
        # Put the IP address of your router here
        localip=192.168.1.1

        clear
        echo 'Router avaiability notification with Growl'
        #variable
        avaiable=false
        com="################"   #comment prefix for logging porpouse

        while true; do
            if $avaiable
            then
                echo "$com 1) $localip avaiable $com"
                echo "1"
                while ping -c 1 -t 2 $localip
                do
                    sleep 5
                done
                growlnotify -s -I $iconpath -m "$localip is offline"
                avaiable=false
            else
                echo "$com 2) $localip not avaiable $com"
                #try to ping the router untill it come back and notify it
                while !(ping -c 1 -t 2 $localip)
                do
                    echo "$com trying.... $com"
                    sleep 5
                done
                echo "$com found $localip $com"
                growlnotify -s -I $iconpath -m "$localip is online"
                avaiable=true
            fi
            sleep 5
        done
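
    One way to restructure it for several hosts, sketched under the assumption that growlnotify and the ping flags stay as in the original: put the watch loop in a function and launch one background copy per address, so a down host never blocks the checks on the others. The IP list and icon path are placeholders:

        #!/bin/sh
        # watch several hosts concurrently; notify via Growl on state changes
        iconpath="/path/to/router/icon/file/internet.png"

        monitor() {
            ip=$1
            up=true
            while true; do
                if ping -c 1 -t 2 "$ip" > /dev/null 2>&1; then
                    if [ "$up" = false ]; then
                        growlnotify -s -I "$iconpath" -m "$ip is online"
                        up=true
                    fi
                else
                    if [ "$up" = true ]; then
                        growlnotify -s -I "$iconpath" -m "$ip is offline"
                        up=false
                    fi
                fi
                sleep 5
            done
        }

        # placeholder addresses: one background watcher per host
        for ip in 192.168.1.1 192.168.1.10 192.168.1.20; do
            monitor "$ip" &
        done
        wait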

    Read the article

  • django inner redirects

    - by Zayatzz
    Hello. I have one project that on my own development computer (which uses mod_wsgi to serve the project) causes no problems. On the live server (which uses mod_fastcgi) it generates a 500, though. My URL conf is like this:

        # -*- coding: utf-8 -*-
        from django.conf.urls.defaults import *

        # Uncomment the next two lines to enable the admin:
        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('',
            url(r'^admin/', include(admin.site.urls)),
            url(r'^', include('jalka.game.urls')),
        )

    and:

        # -*- coding: utf-8 -*-
        from django.conf.urls.defaults import *
        from django.contrib.auth import views as auth_views

        urlpatterns = patterns('jalka.game.views',
            url(r'^$', view = 'front', name = 'front',),
            url(r'^ennusta/(?P<game_id>\d+)/$', view = 'ennusta', name = 'ennusta',),
            url(r'^login/$', auth_views.login, {'template_name': 'game/login.html'}, name='auth_login'),
            url(r'^logout/$', auth_views.logout, {'template_name': 'game/logout.html'}, name='auth_logout'),
            url(r'^arvuta/$', view = 'arvuta', name = 'arvuta',),
        )

    And .htaccess is like this:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteOptions MaxRedirects=10
        # RewriteCond %{HTTP_HOST} .
        RewriteCond %{HTTP_HOST} ^www\.domain\.com
        RewriteRule (.*) http://domain.com/$1 [R=301,L]
        AddHandler fastcgi-script .fcgi
        RewriteCond %{HTTP_HOST} ^jalka\.domain\.com$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*) cgi-bin/fifa2010.fcgi/$1 [QSA,L]
        RewriteCond %{HTTP_HOST} ^subdomain\.otherdomain\.eu$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*) cgi-bin/django.fcgi/$1 [QSA,L]

    Notice that I also have another project set up with the same .htaccess, and that one is running just fine with more complex URLs and views. fifa2010.fcgi:

        #!/usr/local/bin/python
        # -*- coding: utf-8 -*-
        import sys, os

        DOMAIN = "domain.com"
        APPNAME = "jalka"
        PREFIX = "/www/apache/domains/www.%s" % (DOMAIN,)

        # Add a custom Python path.
        sys.path.insert(0, os.path.join(PREFIX, "htdocs/django/Django-1.2.1"))
        sys.path.insert(0, os.path.join(PREFIX, "htdocs"))
        sys.path.insert(0, os.path.join(PREFIX, "htdocs/jalka"))

        # Switch to the directory of your project. (Optional.)
        os.chdir(os.path.join(PREFIX, "htdocs", APPNAME))

        # Set the DJANGO_SETTINGS_MODULE environment variable.
        os.environ['DJANGO_SETTINGS_MODULE'] = "%s.settings" % (APPNAME,)

        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(method="threaded", daemonize="false")

    Alan

    Read the article

  • Help Needed With AJAX Script

    - by Brian
    Hello all. I am working on an AJAX script but am having difficulties. First, here is the script:

        var xmlHttp;

        function GetXmlHttpObject(){
            var objXMLHttp = null;
            if (window.XMLHttpRequest){
                objXMLHttp = new XMLHttpRequest();
            }
            else if (window.ActiveXObject){
                objXMLHttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            return objXMLHttp;
        }

        function retrieveData(){
            var jBossServer = new Array();
            jBossServer[0] = "d01";
            jBossServer[1] = "d02";
            jBossServer[2] = "p01";
            jBossServer[3] = "p02";
            for(var i = 0; i < jBossServer.length; i++){
                xmlHttp = GetXmlHttpObject();
                if (xmlHttp == null){
                    alert("Something weird happened ...");
                    return;
                }
                var url = "./retrieveData.php";
                url = url + "?jBossID=" + jBossServer[i];
                url = url + "&sid=" + Math.random();
                xmlHttp.open("GET", url, true);
                xmlHttp.onreadystatechange = updateMemory;
                xmlHttp.send(null);
            }
        }

        function updateMemory(){
            if (xmlHttp.readyState == 4 || xmlHttp.readyState == "complete"){
                aggregateArray = new Array();
                aggregateArray = xmlHttp.responseText.split(',');
                for(var i = 0; i < aggregateArray.length; i++){
                    alert(aggregateArray[i]);
                }
                return;
            }
        }

    The "retrieveData.php" page will return data that looks like this: d01,1 MB,2 MB,3 MB. The values relate to my four servers d01, d02, p01 and p02 (dev and prod). What I am doing is scraping the memory information from http://127.0.0.1:8080/web-console/ServerInfo.jsp. I want to save code, so I attempted to put the AJAX xmlHttp call into a loop, but I don't think it is working. All I ever get back is the information for "p02", four times. Am I able to do what I want, or do I need four different functions for each server (i.e., xmlHttpServer01, xmlHttpServer02, xmlHttpServer03, xmlHttpServer04)? Thank you all for reading; have a good day. :)
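
    The "p02 four times" symptom is the classic shared-variable problem: all four onreadystatechange callbacks read the single global xmlHttp, and by the time any response arrives that global points at the last request created (p02). A sketch of the usual fix, giving each request its own object captured in a closure, so four separate functions are not needed:

        function retrieveData() {
            var jBossServer = ["d01", "d02", "p01", "p02"];
            for (var i = 0; i < jBossServer.length; i++) {
                var xhr = GetXmlHttpObject();
                if (xhr == null) {
                    alert("Something weird happened ...");
                    return;
                }
                var url = "./retrieveData.php?jBossID=" + jBossServer[i]
                        + "&sid=" + Math.random();
                xhr.open("GET", url, true);
                // capture this request's object in a closure instead of
                // relying on the shared global
                xhr.onreadystatechange = (function (request) {
                    return function () {
                        if (request.readyState == 4) {
                            var parts = request.responseText.split(',');
                            for (var j = 0; j < parts.length; j++) {
                                alert(parts[j]);
                            }
                        }
                    };
                })(xhr);
                xhr.send(null);
            }
        }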

    Read the article

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management and log aggregation on their systems. I am working at a company which uses .NET for all its applications, and all systems are Windows-based. Currently each application looks after its own logging and notifications of failures (e.g. if app A fails, it will send out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find some options for making this work better, and I've come up with the following:

    - log4net & Chainsaw (ah, if it works).
    - Logging via log4net or another framework into a central database, and rolling our own management tool.
    - Logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers and their apps.
    - A hand-rolled solution to suck all the log files into one point and work some magic across them.

    Essentially what we are after is something which can pull log entries together and allow some analytics to be run across them, plus use a kind of event-based system to, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?

    Read the article

  • How to properly develop and deploy features for existing asp.net applications on IIS

    - by Tomh
    My question actually consists of multiple questions. I frequently read about companies who deploy a small subset of features to a select set of customers using the live "database". Ruby on Rails and its ecosystem have deployment tools and database migrations to deploy or roll back such features in a live production or staging environment. My question: how is this done for an ASP.NET (MVC in particular) application? How do you test your newly released features against live data? Do you have any tools to modify the existing database and roll back changes if necessary? Do you make backups before deployment?

    Update: Maybe I should point out that my question is not really clear; getting more answers here will help me phrase it better. To make it easier, I will describe a situation I commonly see with some of my clients. My clients have large deployments of popular web applications. They do not have staging/QA/testing servers (yes, this is not optimal). The data their apps consist of are images, XML files, user uploads and data in SQL Server. Having a few records of their production database and a couple of dummy files is not a substitute for testing against real data, in my opinion. How would you design a workflow that can create an acceptable environment to mimic a production environment before going live?

    Read the article

  • Looking for a good dev environment for OSGi bundles

    - by Riduidel
    Hi, I'm currently investigating dev environments for OSGi bundles. My goal is to find a way to develop, test and debug the bundles I'll be coding with ease. Besides, I have some "cultural" requirements: I want to be able to use Java continuous integration servers (typically Hudson); as a consequence of that first requirement, I want a repeatable, one-click build process, my typical tool for that being Maven; and finally, being a long-term Eclipse user with m2eclipse at hand to merge my Eclipse env with my Maven one, I obviously want to be able to test and debug with that IDE. So far, here are the tools I know I can use (and have already tested): maven-bundle-plugin and maven-ipojo-plugin, which both offer clean packaging facilities. I have tested Maven Pax (and Eclipse Pax) and am not really satisfied with either: Maven Pax generates a very heavy project, where adding dependencies is very error-prone (the maven pax:import-bundle command line, with all its arguments, is a hell per se). I have taken a look at Karaf, which seems to have some nice direct Maven provisioning, but I don't know how to integrate it with my Eclipse besides using the traditional JPDA bridge. However, it seems to be more production-oriented than dev-oriented, and as such may require heavy configuration to fit my needs (although reading its user manual doesn't reveal that). Have you got any ideas? Some Maven/Eclipse plugins?

    Read the article

  • .NET out of memory troubleshooting

    - by bushman
    After reading a few enlightening articles about memory in the .NET technology ("Out of Memory" does not refer to physical memory, 597499), I thought I understood why a C# app would throw an out-of-memory exception -- until I started experimenting with two servers. Both have 2.5 GB of RAM, run Windows Server 2003, and run identical programs. The only significant difference between the two is that one has 7% hard drive storage left and the other more than 50%. The server with 7% storage left consistently throws out-of-memory exceptions, while the other performs consistently well. My app is a C# web application that processes hundreds of MBs of String objects. Why would this difference happen, seeing that the most likely reason for the out-of-memory issue is running out of contiguous virtual address space? What solutions do you propose, and what do you say about the following:

    1. Turn on the /3GB switch to increase the virtual address space.
    2. Instead of using one giant string object, break it up into smaller pieces and collect them in a jagged array (here I have to find a way to return the result to the caller in some other way, as right now the return type is a string).

    Thanks, SO

    Read the article

  • PHP $_SERVER['HTTP_HOST'] vs. $_SERVER['SERVER_NAME'], am I understanding the man pages correctly?

    - by Jeff
    I did a lot of searching and also read the PHP $_SERVER man page. Do I have this right regarding which to use for my PHP scripts for simple link definitions used throughout my site? $_SERVER['SERVER_NAME'] is based on your web server's config file (Apache2 in my case), and varies depending on a few directives: (1) VirtualHost, (2) ServerName, (3) UseCanonicalName, etc. $_SERVER['HTTP_HOST'] is based on the request from the client. Therefore, it would seem to me that the proper one to use in order to make my scripts as compatible as possible would be $_SERVER['HTTP_HOST']. Is this assumption correct?

    Followup comments: I guess I got a little paranoid after reading this article and noting that someone said they "wouldn't trust any of the $_SERVER vars": http://markjaquith.wordpress.com/2009/09/21/php-server-vars-not-safe-in-forms-or-links/ and also http://www.php.net/manual/en/reserved.variables.server.php (comment: Vladimir Kornea 14-Mar-2009 01:06). Apparently the discussion is mainly about $_SERVER['PHP_SELF'] and why you shouldn't use it in the form action attribute without proper escaping to prevent XSS attacks. My conclusion about my original question above is that it is "safe" to use $_SERVER['HTTP_HOST'] for all links on a site without having to worry about XSS attacks, even when used in forms. Please correct me if I'm wrong.
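
    For links and form actions, a common middle ground is to accept $_SERVER['HTTP_HOST'] only against a whitelist, which keeps the client-driven flexibility without trusting the header blindly. A sketch with placeholder hostnames:

        // validate the client-supplied host against known-good names
        $allowed = array('example.com', 'www.example.com');  // placeholders
        $host = isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '';
        if (!in_array($host, $allowed, true)) {
            $host = 'example.com';   // fall back to the canonical name
        }
        $link = 'http://' . $host . '/some/path/';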

    Read the article

  • C# COM Cross Thread problem

    - by user364676
    Hi, we're developing software to control a scientific measuring device. It provides a COM interface which defines several functions for setting measurement parameters, and it fires an event when it has measured data. In order to test our software, I'm implementing a simulation of that device. The COM object runs a loop which periodically fires the event; another loop in the client app should now set up the COM simulator using the given functions. I created a class for measurement parameters which will be instantiated when setting up a new measurement:

        // COM object
        public class MeasurementParams
        {
            public double Param1;
            public double Param2;
        }

        public class COM_Sim : ICOMDevice
        {
            public MeasurementParams newMeasurement;
            IClient client;

            public int NewMeasurement()
            {
                newMeasurement = new MeasurementParams();
                return 0;
            }

            public int SetParam1(double val)
            {
                // why is newMeasurement null when this method is called from the client loop?
                newMeasurement.Param1 = val;
                return 0;
            }

            void loop()
            {
                while (true)
                {
                    // fire event
                    client.HandleEvent();
                }
            }
        }

        public class Client : IClient
        {
            ICOMDevice server;

            public int HandleEvent()
            {
                // handle this event
                server.NewMeasurement();
                server.SetParam1(0.0);
                return 0;
            }

            void loop()
            {
                while (true)
                {
                    // do some stuff...
                    server.NewMeasurement();
                    server.SetParam1(0.0);
                }
            }
        }

    Both of the loops run in independent threads. When server.NewMeasurement() is called, the object on the server is set to a new instance, but in the next function call the object is null again. Doing the same when handling the server event works perfectly, because the method runs in the server's thread. How do I make it work from the client thread as well? As the client is meant to work with the real device, I cannot modify the interfaces given by the manufacturer. Also, I need to set up measurements independently of the event handler, which will not be fired regularly. I assume this problem is related to multithreaded COM behavior, but I found nothing on this topic.

    Read the article

  • How to setup Lucene search for a B2B web app?

    - by Bill Paetzke
    Given:

    - 5000 databases (spread out over 5 servers)
    - 1 database per client (so you can infer there are 1000 clients)
    - 2 to 2000 users per client (let's say the average is 100 users per client)
    - clients (databases) come and go every day (let's assume most remain for at least one year)
    - let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support

    The question: how would you set up Lucene search so that each client can only search within its own database? How would you set up the index(es)? Would you need to add a filter to all search queries? If a client cancelled, how would you delete their (part of the) index? (This may be trivial; not sure yet.)

    Possible solutions:

    1. Make an index for each client (database). Pro: search is faster than with the one-index-for-all method, and indices are relative to the size of the client's data. Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope.
    2. Have a single, gigantic index with a database_name field, and always include database_name as a filter. Pro: not sure; maybe good for the tech support or billing dept to search all databases for info. Con: search is slower than with the index-per-client method, and security is flawed if the query filter is removed.

    For example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets their own database. His situation is quite similar to mine, although he didn't elaborate on the setup (particularly indices); hence the need for this question. One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.
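
    For the single-index option, both the mandatory filter and the cancellation cleanup are small pieces of code on the database_name field. A sketch against the classic Lucene API; userQuery, clientDb and the writer are placeholders supplied by the caller:

        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        public class ClientScopedSearch {
            // wrap every user query with a mandatory per-client clause
            public static Query scope(Query userQuery, String clientDb) {
                BooleanQuery query = new BooleanQuery();
                query.add(userQuery, BooleanClause.Occur.MUST);
                query.add(new TermQuery(new Term("database_name", clientDb)),
                          BooleanClause.Occur.MUST);
                return query;
            }

            // when a client cancels, drop their slice of the index
            public static void purge(IndexWriter writer, String clientDb)
                    throws java.io.IOException {
                writer.deleteDocuments(new Term("database_name", clientDb));
            }
        }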

    Read the article

  • Speeding up a soap powered website

    - by ChrisRamakers
    Hi all, we're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP web service. But... our servers are located in Belgium and the web service we connect to is located in San Francisco, so it's a long-distance connection to say the least. Our website is PHP-powered, using PHP's built-in SoapClient class. On average a call to the web service takes 0.7 seconds, and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed. This is the code which instantiates the SoapClient; what I'm looking for now is other ways/methods to improve the speed of single requests. Anyone have ideas or suggestions?

        private function _createClient()
        {
            try {
                $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
                $client = new SoapClient($wsdl, array(
                    'soap_version'       => SOAP_1_1,
                    'encoding'           => 'utf-8',
                    'connection_timeout' => 5,
                    'cache_wsdl'         => 1,
                    'trace'              => 1,
                    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
                ));

                $header_tags = array(
                    'username' => new SOAPVar($this->config->wsUsername, XSD_STRING, null, null, null, $this->ns),
                    'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING, null, null, null, $this->ns)
                );
                $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
                $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
                $client->__setSoapHeaders($header);
            } catch (SoapFault $e) {
                controller('Error')->error($id.': Webservice connection error '.$e->getCode());
                exit;
            }

            $this->client = $client;
            return $this->client;
        }
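
    One transport-level lever that sometimes pays off on a long-haul link such as Belgium to San Francisco is requesting gzip on the wire; the remote service has to support it, so treat this as an experiment rather than a guaranteed win:

        // ask for compressed SOAP envelopes (server cooperation required)
        $client = new SoapClient($wsdl, array(
            'soap_version' => SOAP_1_1,
            'encoding'     => 'utf-8',
            'compression'  => SOAP_COMPRESSION_ACCEPT | SOAP_COMPRESSION_GZIP,
            'cache_wsdl'   => 1
        ));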

    Read the article

  • localhost not going to desired VirtualHost

    - by ladaghini
    I have several VirtualHosts set up on my computer. I'd like to visit the site I'm currently working on from a different PC using my computer's IP address, but every config I've tried keeps taking me to a different virtual host (in fact, the first VirtualHost I set up on my computer). How do I set up the Apache VirtualHost configs to ensure that the IP address takes me to the site I want?

    /etc/apache2/sites-available/site-i-want-to-show-up-with-ip-address.conf contains:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerAlias currentsite.com
            DocumentRoot /path/to/root/of/site-i-want-to-show-up
            ServerName localhost
            ScriptAlias /awstats/ /usr/lib/cgi-bin/
            CustomLog /var/log/apache2/current-site-access.log combined
        </VirtualHost>

    And /etc/apache2/sites-available/site-that-keeps-showing-up.conf contains:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerAlias theothersite.com
            DocumentRoot /path/to/it
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
        </VirtualHost>

    I'd appreciate anyone's help. Also, I don't know too much about configuring web servers; I used tutorials to get the above code.
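
    What is happening is Apache's documented fallback for name-based virtual hosts: a request whose Host header matches no ServerName or ServerAlias (a bare IP address, here) is served by the first vhost loaded. Two ways out, sketched with a placeholder LAN IP: add the IP as an alias on the desired site, or make that site load first (on Debian/Ubuntu, an 000- filename prefix in sites-enabled does this):

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName localhost
            ServerAlias currentsite.com 192.168.1.50   # placeholder LAN IP
            DocumentRoot /path/to/root/of/site-i-want-to-show-up
        </VirtualHost>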

    Read the article
