Search Results

Search found 16355 results on 655 pages for 'lua c connection'.


  • WebKit "Refused to set unsafe header 'Content-Length'"

    - by Paul
    I am trying to implement a simple XHR abstraction, and am getting this warning when trying to set the headers for a POST. I think it might have something to do with setting the headers in a separate .js file, because when I set them in the <script> tag in the .html file, it worked fine. The POST request is working fine, but I get this warning and am curious why. I get this warning for both the Content-Length and Connection headers, but only in WebKit browsers (Chrome 5 beta and Safari 4). In Firefox, I don't get any warnings; the Content-Length header is set to the correct value, but Connection is set to keep-alive instead of close, which makes me think that it is also ignoring my setRequestHeader calls and generating its own. I have not tried this code in IE. Here is the markup and code:

    test.html:

        <!DOCTYPE html>
        <html>
        <head>
            <script src="jsfile.js"></script>
            <script>
                var request = new Xhr('POST', 'script.php', true, 'data=somedata', function(data) {
                    console.log(data.text);
                });
            </script>
        </head>
        <body>
        </body>
        </html>

    jsfile.js:

        function Xhr(method, url, async, data, callback) {
            var x;
            if (window.XMLHttpRequest) {
                x = new XMLHttpRequest();
                x.open(method, url, async);
                x.onreadystatechange = function() {
                    if (x.readyState === 4) {
                        if (x.status === 200) {
                            var data = {
                                text: x.responseText,
                                xml: x.responseXML
                            };
                            callback.call(this, data);
                        }
                    }
                }
                if (method.toLowerCase() === "post") {
                    x.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
                    x.setRequestHeader("Content-Length", data.length);
                    x.setRequestHeader("Connection", "close");
                }
                x.send(data);
            } else {
                // ... implement IE code here ...
            }
            return x;
        }
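
    Content-Length and Connection are among the request headers the XMLHttpRequest specification reserves for the browser itself, which is why WebKit warns (Firefox silently ignores the calls, matching the keep-alive behavior observed above). A minimal sketch of the same POST branch with the browser-managed headers left out; the browser computes Content-Length from the body it actually sends:

        if (method.toLowerCase() === "post") {
            // Content-Type is the only one of the three that scripts may set;
            // the browser fills in Content-Length and Connection on its own.
            x.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        }
        x.send(data);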


  • How to create an Entity Framework model from an existing SQLite database in Visual Studio 2008?

    - by splattne
    I've installed the System.Data.SQLite ADO.NET provider from http://sqlite.phxsoftware.com/. I can connect to the database from within Visual Studio and can open table schemas, views, etc. I'd like to use an existing SQLite database to create an Entity Framework model in Visual Studio 2008. However, when I try to create a new ADO.NET Entity Data Model (.edmx) file using the wizard, the existing SQLite connection is not in the list. It's also not possible to create a SQLite connection, because there is no provider for SQLite: the wizard only lists SQL Server, SQL Server file and SQL Server Compact 3.5. Any idea how to solve this problem?
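
    One thing worth checking: the Entity Data Model wizard only offers ADO.NET providers that are registered in DbProviderFactories, and that design-time registration is separate from merely having the assembly available (the System.Data.SQLite installer also reportedly has a Visual Studio design-time option that must be selected). A sketch of the app.config/machine.config entry the wizard typically needs, assuming the standard System.Data.SQLite factory:

        <system.data>
          <DbProviderFactories>
            <add name="SQLite Data Provider"
                 invariant="System.Data.SQLite"
                 description=".NET Framework Data Provider for SQLite"
                 type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" />
          </DbProviderFactories>
        </system.data>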


  • ERR_INCOMPLETE_CHUNKED_ENCODING apache 2.4

    - by Bujanca Mihai
    I upgraded my Ubuntu server to 14.04 and Apache 2.4.7. Now my images don't load and the console yields net::ERR_INCOMPLETE_CHUNKED_ENCODING. Also, I can sometimes see some of the images load for a little while (1 sec max) and then they disappear.

    .htaccess:

        RewriteEngine On
        # Serve the favicon file from img folder
        RewriteCond %{REQUEST_URI} ^/favicon.ico$
        RewriteRule ^(.*)$ /img/$1 [NC,L]
        # Redirect HTTP traffic to WWW subdomain
        RewriteCond %{HTTPS} off [NC]
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
        # Redirect HTTPS traffic to WWW subdomain
        RewriteCond %{HTTPS} on [NC]
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L]
        # Auto Versioning rules
        RewriteCond %{REQUEST_FILENAME} !-s
        RewriteRule ^(.*)\.[\d]+\.(css|js)$ $1.$2 [L]
        # Default Zend rewrite rules
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    VHost (stock Debian comment blocks trimmed for brevity):

        <VirtualHost *:80>
            ServerAdmin admin@localhost
            ServerName localhost
            DocumentRoot /home/mihai/ARTD/www/public/website
            # Omit this in production environment
            SetEnv APPLICATION_ENV local
            <Directory /home/mihai/ARTD/www/public/website>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                #Order deny,allow
                #Allow from all
                Require all granted
            </Directory>
            <IfModule mod_php5.c>
                php_value memory_limit 128M
                php_value upload_max_filesize 20M
                php_value post_max_size 20M
            </IfModule>
            ErrorLog /var/log/apache2/ARTD-error.log
            LogLevel warn
            CustomLog /var/log/apache2/ARTD-access.log combined
        </VirtualHost>

        <IfModule mod_ssl.c>
            <VirtualHost *:443>
                ServerAdmin admin@localhost
                ServerName localhost
                DocumentRoot /home/mihai/ARTD/www/public/website
                # Omit this in production environment
                SetEnv APPLICATION_ENV local
                <Directory /home/mihai/ARTD/www/public/website>
                    Options Indexes FollowSymLinks MultiViews
                    AllowOverride All
                    Require all granted
                </Directory>
                <IfModule mod_php5.c>
                    php_value memory_limit 128M
                    php_value upload_max_filesize 20M
                    php_value post_max_size 20M
                </IfModule>
                ErrorLog /var/log/apache2/ARTD-ssl-error.log
                LogLevel warn
                CustomLog /var/log/apache2/ARTD.log combined
                SSLEngine on
                SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
                SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
                #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
                #SSLCACertificatePath /etc/ssl/certs/
                #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
                #SSLCARevocationPath /etc/apache2/ssl.crl/
                #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
                #SSLVerifyClient require
                #SSLVerifyDepth 10
                #<Location />
                #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
                #             and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
                #             and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
                #             and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
                #             and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
                #            or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
                #</Location>
                #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
                #<FilesMatch "\.(cgi|shtml|phtml|php)$">
                #    SSLOptions +StdEnvVars
                #</FilesMatch>
                #BrowserMatch ".*MSIE.*" \
                #    nokeepalive ssl-unclean-shutdown \
                #    downgrade-1.0 force-response-1.0
            </VirtualHost>
        </IfModule>

    Logs:

        Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.3 OpenSSL/1.0.1f (internal dummy connection)
        127.0.0.1 - - [25/Aug/2014:13:09:53 +0300] "GET /img/header/top-nav-separator.png HTTP/1.1" 200 462 "https://localhost/art" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36"
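
    net::ERR_INCOMPLETE_CHUNKED_ENCODING means the browser received fewer bytes than the chunked framing promised, i.e. Apache truncated the response mid-stream. With 2.4 upgrades the usual suspects are on-the-fly compression/filtering of static files and a full partition for logs or temp files, so it may be worth ruling out mod_deflate for images first. A hedged diagnostic sketch for the vhost or .htaccess (requires mod_setenvif; the pattern is an assumption covering the common image types):

        <IfModule mod_deflate.c>
            # While debugging: exclude images from on-the-fly compression
            SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip
        </IfModule>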


  • How do I get autotest (ZenTest) to see my namespaced stuff?

    - by Blaine LaFreniere
    Autotest is supposed to map my tests to a class, I believe. When I have class Foo and class FooTest, autotest should see FooTest and say, "Hey, this test corresponds to the unit Foo, so I'll look for changes there and re-run tests when changes occur." And that works, however...

    When I have Foo::Bar and Foo::BarTest, autotest doesn't seem to make the connection, and whenever I edit Foo::Bar, autotest does not re-run Foo::BarTest. Am I doing something wrong?

    EDIT: File structure might be helpful. Here it is:

    Module and class files:

        lib/foo.rb
        lib/foo/bar.rb
        lib/foo/baz.rb

    Test files:

        test/unit/foo/bar.rb
        test/unit/baz.rb

    I would think that autotest is able to make the connection between Foo::Bar and Foo::BarTest, but apparently it doesn't.
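
    Two things may be at play here: autotest discovers test files by naming convention (typically test/unit/**/*_test.rb or test/test_*.rb, which neither test/unit/foo/bar.rb nor test/unit/baz.rb matches), and non-standard layouts need an explicit mapping. A hedged sketch of a project-local .autotest file, assuming ZenTest's usual add_mapping API and assuming the test files are renamed to the *_test.rb convention:

        # .autotest - map lib/foo/bar.rb to test/unit/foo/bar_test.rb
        Autotest.add_hook :initialize do |at|
          at.add_mapping(%r%^lib/foo/(.*)\.rb$%) do |_, m|
            ["test/unit/foo/#{m[1]}_test.rb"]
          end
        end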


  • How do I use Google protobuf to communicate over a serial port?

    - by rob
    I'm working on a project that uses RXTX and protobuf to communicate with an application on a development board, and I've been running into issues which imply that I'm likely doing things the wrong way. Here's what I currently have for writing the request to the board (the read code is similar):

        public void write(CableCommandRequest request, OutputStream out) {
            CodedOutputStream outStream = CodedOutputStream.newInstance(out);
            request.writeTo(outStream);
            outStream.flush();
        }

    The OutputStream that is used is prepared by RXTX, and the development board seems to indicate that data is being received, but it is getting garbled or is otherwise not being understood. There seems to be little documentation on using protobuf over a serial connection, so I'm assuming that passing the OutputStream should be sufficient. Is this in fact correct, or is this the wrong way of sending the response over the serial connection?
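
    One relevant detail: a serialized protobuf message is not self-delimiting. The bytes carry no start or end marker, so on a raw serial stream the receiver cannot tell where one message stops, and any framing mismatch looks like garbled data. The usual convention is a length prefix, which protobuf's Java runtime ships as writeDelimitedTo/parseDelimitedFrom. A sketch under that assumption (the board-side parser must of course apply the same varint-length framing):

        // Write: varint length prefix, then the message bytes
        public void write(CableCommandRequest request, OutputStream out) throws IOException {
            request.writeDelimitedTo(out);
            out.flush();
        }

        // Read: blocks until one whole length-prefixed message has arrived
        public CableCommandRequest read(InputStream in) throws IOException {
            return CableCommandRequest.parseDelimitedFrom(in);
        }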


  • Self referencing symmetrical Hibernate Map Table using @ManyToMany

    - by sammichy
    I have the following class:

        public class ElementBean {
            private String link;
            private Set<ElementBean> connections;
        }

    I need to create a map table where elements are mapped to each other in a many-to-many symmetrical relationship.

        @ManyToMany(targetEntity=ElementBean.class)
        @JoinTable(
            name="element_elements",
            joinColumns=@JoinColumn(name="FROM_ELEMENT_ID", nullable=false),
            inverseJoinColumns=@JoinColumn(name="TO_ELEMENT_ID", nullable=false)
        )
        public Set<ElementBean> getConnections() {
            return connections;
        }

    I have the following requirements: When element A is added as a connection to Element B, then element B should become a connection of Element A. So A.getConnections() should return B and B.getConnections() should return A. I do not want to explicitly create 2 records, one for mapping A to B and another for B to A. Is this possible?
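
    With a single unidirectional @ManyToMany, Hibernate has no notion of "symmetric": it persists exactly the (FROM, TO) pairs present in the collection, so either both directions are added in memory (two rows) or the reverse direction has to be resolved at query time (one row). A hedged sketch of the usual convenience-method approach, which keeps the object graph consistent but does write one join-table row per direction:

        // Inside ElementBean; keeps both sides of the relation in sync.
        public void addConnection(ElementBean other) {
            this.connections.add(other);
            other.getConnections().add(this);
        }

    If a single row per pair is a hard requirement, the common alternative is to drop the mapped collection and load connections with a query that matches the id against either column (e.g. WHERE FROM_ELEMENT_ID = :id OR TO_ELEMENT_ID = :id).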


  • Java JDBC: Reply.fill()

    - by Harv
    Hi, I get the following exception at times:

        com.ibm.db2.jcc.b.gm: [jcc][t4][2030][11211][3.50.152] A communication error occurred during
        operations on the connection's underlying socket, socket input stream, or socket output stream.
        Error location: Reply.fill(). Message: Connection reset. ERRORCODE=-4499, SQLSTATE=08001

    The problem is that the code executes successfully for quite some time and then suddenly I get this exception. However, it runs perfectly when I run the code again. Could someone please tell me what could be wrong and provide me some pointers to resolve this? Thanks, Harveer
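
    The "works again on retry" pattern usually points at an idle connection that a firewall or the server quietly dropped; the next use then fails with a connection reset. If the connections are long-lived, validating them before use (or letting a connection pool do so) is the standard cure. A minimal sketch, assuming a getNewConnection() helper on your side and a driver that supports JDBC 4's isValid():

        // isValid() pings the server, with the timeout given in seconds
        if (conn == null || conn.isClosed() || !conn.isValid(5)) {
            conn = getNewConnection();
        }
        // ...then run the statement as usual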


  • Difference between Setting.settings and web.config?

    - by Muneeb
    This might sound a bit dumb. I always had this impression that web.config should store all settings which are subject to change post-build, and Settings.settings should have the ones which may change pre-build. But I have seen projects which had, for example, connection strings in Settings.settings. Connection strings should always be in web.config, shouldn't they? I am interested in a design-perspective answer. Just a bit of background: my current scenario is that I am developing a web application with all three tiers abstracted in three separate Visual Studio projects, thus every tier has its own .settings and .config file.
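
    For a web application that impression is broadly right: web.config can be edited on the deployed server without a rebuild, while Settings.settings values are compiled into the assembly as defaults. A sketch of the conventional home for a connection string (the names are illustrative):

        <connectionStrings>
          <add name="MainDb"
               connectionString="Data Source=.;Initial Catalog=Shop;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    Class-library tiers can still refer to a named connection string; at runtime it is the hosting web application's web.config that is read, which is usually the behavior you want.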


  • groovy multithreading

    - by srinath
    Hi, I'm a newbie to Groovy/Grails. How do I implement threading for this code? I had 2500 URLs, and it was taking hours to check each URL, so I decided to implement multi-threading. Here is my sample code:

        def urls = [
            "http://www.wordpress.com",
            "http://67.192.103.225/QRA.Public/",
            "http://www.subaru.com",
            "http://baldwinfilter.com/products/start.html"
        ]

        def up = urls.collect { ur ->
            try {
                def url = new URL(ur)
                def connection = url.openConnection()
                if (connection.responseCode == 200) {
                    return true
                } else {
                    return false
                }
            } catch (Exception e) {
                return false
            }
        }

    For this code I need to implement multi-threading. Could anyone please suggest the code? Thanks in advance, sri.
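
    One straightforward way on the JVM is a fixed thread pool from java.util.concurrent, which Groovy can use directly. Each URL check becomes a Callable, and collecting the futures preserves the order of the results. A sketch; the pool size and timeouts are assumptions to tune:

        import java.util.concurrent.Callable
        import java.util.concurrent.Executors

        def pool = Executors.newFixedThreadPool(20)   // 20 threads: an assumption, tune it
        try {
            def futures = urls.collect { u ->
                pool.submit({
                    try {
                        def connection = new URL(u).openConnection()
                        connection.connectTimeout = 5000   // fail fast on dead hosts
                        connection.readTimeout = 5000
                        return connection.responseCode == 200
                    } catch (Exception e) {
                        return false
                    }
                } as Callable)
            }
            def up = futures.collect { it.get() }       // same order as urls
        } finally {
            pool.shutdown()
        }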


  • Why Won't the WebSocket.onmessage Event Fire?

    - by SumWon
    Hey guys, after toying around with this for hours, I simply cannot find a solution. I'm working on a WebSocket server using node.js for a canvas-based online game I'm developing. My game can connect to the server just fine; it accepts the handshake and can even send messages to the server. However, when the server responds to the client, the client doesn't get the message. No errors, nothing, it just sits there peacefully. I've ripped apart my code, trying everything I could think of to fix this, but alas, nothing. Here's a stripped copy of my server code. As I said before, the handshake works fine and the server receives data fine, but sending data back to the client does not work:

        var sys = require('sys'),
            net = require('net');

        var server = net.createServer(function (stream) {
            stream.setEncoding('utf8');
            var shaken = 0;

            stream.addListener('connect', function () {
                sys.puts("New connection from: " + stream.remoteAddress);
            });

            stream.addListener('data', function (data) {
                if (!shaken) {
                    sys.puts("Handshaking...");
                    // Send handshake:
                    stream.write(
                        "HTTP/1.1 101 Web Socket Protocol Handshake\r\n" +
                        "Upgrade: WebSocket\r\n" +
                        "Connection: Upgrade\r\n" +
                        "WebSocket-Origin: http://192.168.1.113\r\n" +
                        "WebSocket-Location: ws://192.168.1.71:7070/\r\n\r\n");
                    shaken = 1;
                    sys.puts("Handshaking complete.");
                } else {
                    // Message received, respond with 'testMessage'
                    var d = "testMessage";
                    var m = '\u0000' + d + '\uffff';
                    sys.puts("Sending '" + m + "' to client");
                    var result = stream.write(m, "utf8");
                    sys.puts(result);
                    /* Result comes back true, meaning that it pushed the data out.
                       Why isn't the client seeing it?!? */
                }
            });

            stream.addListener('end', function () {
                sys.puts("Connection closed!");
                stream.end();
            });
        });

        server.listen(7070);
        sys.puts("Server Started!");
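
    A likely culprit is the encoding of the frame terminator. Under the old draft-75/76 framing a text frame is the byte 0x00, the payload, then the byte 0xFF; but writing '\uffff' with the 'utf8' encoding emits U+FFFF as a three-byte UTF-8 sequence, so the browser never sees the single 0xFF terminator and never fires onmessage. A hedged sketch of the fix:

        // Build the frame, then write it as raw bytes rather than UTF-8 text,
        // so 0x00 and 0xFF go over the wire as single bytes.
        var m = '\u0000' + d + '\uffff';
        stream.write(m, 'binary');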


  • ASP.NET membership logon page and HTTPS redirect

    - by DefSol
    Hello, I have a GridView control that has an onclick event bound to every cell. Once a user clicks on a cell, it directs them to a booking confirmation page and passes 3 URL variables. This booking page is behind ASP.NET membership, and thus if the user is not logged in they are served the login page. The login page has a redirect to an HTTPS connection in the onload event, using the IsSecure property. The issue is that once the user logs in and is returned to the booking confirmation page, I lose 2 of the URL vars. If I remove the HTTPS redirect, everything works fine, but then the user logs on over an HTTP connection, which is not cool. Appreciate any help, thanks. Reuben
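
    A common way to lose exactly the extra query-string variables is an unencoded ReturnUrl: if the redirect builds ...?ReturnUrl=Booking.aspx?a=1&b=2&c=3, the second and third ampersand-delimited pairs are parsed as parameters of the login page itself and never come back. That is an assumption about what the onload redirect does, but if it matches, a sketch of a redirect that re-encodes the return URL (names are illustrative):

        // In the login page's redirect-to-HTTPS logic: rebuild the URL and
        // URL-encode the return URL so its own query string survives.
        string returnUrl = Request.QueryString["ReturnUrl"];
        string target = "https://" + Request.Url.Host + Request.Url.AbsolutePath
                        + "?ReturnUrl=" + Server.UrlEncode(returnUrl);
        Response.Redirect(target);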


  • using Spring JdbcTemplate for multiple database operations

    - by Joel Carranza
    I like the apparent simplicity of JdbcTemplate, but am a little confused as to how it works. It appears that each operation (query() or update()) fetches a connection from a datasource and closes it. Beautiful, but how do you perform multiple SQL queries within the same connection? I might want to perform multiple operations in sequence (for example a SELECT followed by an INSERT followed by a commit), or I might want to perform nested queries (a SELECT, and then a second SELECT based on the result of each row). How do I do that with JdbcTemplate? Am I using the right class?
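
    The usual answer is to keep JdbcTemplate and add transaction management: inside a transaction, Spring binds one connection to the current thread, and every JdbcTemplate call participating in it reuses that connection. A minimal programmatic sketch in Spring 2.x style, assuming a configured DataSource named dataSource; the table and column names are invented:

        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.jdbc.datasource.DataSourceTransactionManager;
        import org.springframework.transaction.TransactionStatus;
        import org.springframework.transaction.support.TransactionCallbackWithoutResult;
        import org.springframework.transaction.support.TransactionTemplate;

        final JdbcTemplate jdbc = new JdbcTemplate(dataSource);
        TransactionTemplate tx = new TransactionTemplate(new DataSourceTransactionManager(dataSource));

        tx.execute(new TransactionCallbackWithoutResult() {
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // Both statements run on the same thread-bound connection;
                // the commit happens when the callback returns normally.
                int id = jdbc.queryForInt("SELECT id FROM accounts WHERE name = ?",
                                          new Object[] { "alice" });
                jdbc.update("INSERT INTO audit_log (account_id) VALUES (?)",
                            new Object[] { new Integer(id) });
            }
        });

    The same effect is available declaratively with @Transactional once a transaction manager is configured.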


  • MySql timeouts - Should I set autoReconnect=true in Spring application?

    - by George
    After periods of inactivity on my website (using Spring 2.5 and MySQL), I get the following error:

        org.springframework.dao.RecoverableDataAccessException: The last packet sent successfully
        to the server was 52,847,830 milliseconds ago. is longer than the server configured value
        of 'wait_timeout'. You should consider either expiring and/or testing connection validity
        before use in your application, increasing the server configured values for client timeouts,
        or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.

    According to this question, and the linked bug, I shouldn't just set autoReconnect=true. Does this mean I have to catch this exception on any queries I do and then retry the transaction? Should that logic be in the data access layer, or the model layer? Is there an easy way to handle this instead of wrapping every single query to catch this?
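
    The usual way out is the driver message's first suggestion: test connections before use, which a pooling DataSource does for you, so no per-query catch-and-retry code is needed in either layer. A sketch with Commons DBCP (the property names are DBCP's; the URL and credentials are placeholders, and c3p0 and other pools have equivalents):

        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://localhost/mydb"/>
            <property name="username" value="app"/>
            <property name="password" value="secret"/>
            <!-- validate each connection as it leaves the pool; stale ones are replaced -->
            <property name="validationQuery" value="SELECT 1"/>
            <property name="testOnBorrow" value="true"/>
        </bean>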


  • How to determine MS Access field size via OleDb

    - by Andy
    The existing application is in C#. During startup the application calls a virtual method to make changes to the database (for example a new revision may need to calculate a new field or something). An open OleDb connection is passed into the method. I need to change a field width. The ALTER TABLE statement is working fine. But I would like to avoid executing the ALTER TABLE statement if the field is already the appropriate size. Is there a way to determine the size of an MS Access field using the same OleDb connection?
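
    The schema rowsets exposed through the same OleDbConnection should cover this: the Columns rowset reports CHARACTER_MAXIMUM_LENGTH for text fields. A sketch, with the table, column, and target width as placeholders:

        int desiredSize = 255;  // the target width; an assumption for this sketch

        // Restriction order for the Columns rowset: catalog, schema, table, column
        DataTable cols = connection.GetOleDbSchemaTable(
            OleDbSchemaGuid.Columns,
            new object[] { null, null, "MyTable", "MyColumn" });

        if (cols.Rows.Count == 1)
        {
            int currentSize = Convert.ToInt32(cols.Rows[0]["CHARACTER_MAXIMUM_LENGTH"]);
            if (currentSize != desiredSize)
            {
                // only now execute the ALTER TABLE statement
            }
        }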


  • NSMutableURLRequest not obeying my timeoutInterval

    - by kubi
    I'm POSTing a small image, so I'd like the timeout interval to be short. If the image doesn't send in a few seconds, it's probably never going to send. For some unknown reason my NSURLConnection is never failing, no matter how short I set the timeoutInterval.

        // Create the URL request
        NSMutableURLRequest *request = [[NSMutableURLRequest alloc]
            initWithURL:[NSURL URLWithString:@"http://www.tumblr.com/api/write"]
            cachePolicy:NSURLRequestUseProtocolCachePolicy
            timeoutInterval:0.00000001];
        /* Populate the request, this part works fine */
        [NSURLConnection connectionWithRequest:request delegate:self];

    I have a breakpoint set on:

        - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error

    but it's never being triggered. My images continue to be posted just fine; they're showing up on Tumblr despite the tiny timeoutInterval.
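
    This matches a long-reported Foundation behavior: for POST requests that carry a body, the system has been reported to clamp timeoutInterval up to a floor of several minutes, so tiny values are silently ignored. The usual workaround is to enforce the deadline yourself. A hedged Objective-C sketch, where connection and timeoutTimer are assumed properties on the class:

        self.connection = [NSURLConnection connectionWithRequest:request delegate:self];
        self.timeoutTimer = [NSTimer scheduledTimerWithTimeInterval:5.0
                                                             target:self
                                                           selector:@selector(timeoutFired:)
                                                           userInfo:nil
                                                            repeats:NO];

        // Elsewhere in the same class (invalidate the timer on success):
        - (void)timeoutFired:(NSTimer *)timer {
            [self.connection cancel];  // give up, then handle it as a failure
        }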


  • NodeJS and node-mongodb-native

    - by w1nk
    Just getting started with node, and trying to get the mongo driver to work. I've got my connection set up, and oddly I can insert things just fine; however, calling find on a collection produces craziness:

        var db = new mongo.Db('things',
            new mongo.Server('192.168.2.6', mongo.Connection.DEFAULT_PORT, {}), {});

        db.open(function(err, db) {
            db.collection('things', function(err, collection) {
                // collection.insert(row);
                collection.find({}, null, function(err, cursor) {
                    cursor.each(function(err, doc) {
                        sys.puts(sys.inspect(doc, true));
                    });
                });
            });
        });

    If I uncomment the insert and comment out the find, it works a treat. The inverse unfortunately doesn't hold; I receive this error:

        collection.find({}, null, function(err, cursor) {
        ^
        TypeError: Cannot call method 'find' of null

    I'm sure I'm doing something silly, but for the life of me I can't find it...
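
    "Cannot call method 'find' of null" means collection itself is null, i.e. the db.collection() callback was handed an error that the code never checks (with this driver that can happen, for instance, when the collection does not exist yet and the Db was opened in strict mode). A hedged sketch that checks each callback's err so the underlying failure is reported instead of crashing:

        db.open(function(err, db) {
            if (err) { sys.puts('open failed: ' + err); return; }
            db.collection('things', function(err, collection) {
                if (err) { sys.puts('collection failed: ' + err); return; }
                collection.find(function(err, cursor) {
                    if (err) { sys.puts('find failed: ' + err); return; }
                    cursor.each(function(err, doc) {
                        if (doc) sys.puts(sys.inspect(doc, true));
                    });
                });
            });
        });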


  • Using chunked encoding in a POST request to an asmx web service on IIS 6 generates a 404

    - by user175869
    Hi, I'm using a CXF client to communicate with a .NET web service running on IIS 6. This request (anonymised):

        POST /EngineWebService_v1/EngineWebService_v1.asmx HTTP/1.1
        Content-Type: text/xml; charset=UTF-8
        SOAPAction: "http://.../Report"
        Accept: */*
        User-Agent: Apache CXF 2.2.5
        Cache-Control: no-cache
        Pragma: no-cache
        Host: uat9.gtios.net
        Connection: keep-alive
        Transfer-Encoding: chunked

    followed by 7 chunks of 4089 bytes and one of 369 bytes, generates the following output after the first chunk has been sent:

        HTTP/1.1 404 Not Found
        Content-Length: 103
        Date: Wed, 10 Feb 2010 13:00:08 GMT
        Connection: Keep-Alive
        Content-Type: text/html

    Anyone know how to get IIS to accept chunked input for a POST? Thanks
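
    Classic ASP.NET (.asmx) on IIS 6 is widely reported not to accept chunked request bodies, so rather than fighting the server it is usually easier to make the CXF client buffer the request and send a Content-Length instead. A sketch using CXF's HTTP conduit, where port is the JAX-WS client proxy:

        import org.apache.cxf.endpoint.Client;
        import org.apache.cxf.frontend.ClientProxy;
        import org.apache.cxf.transport.http.HTTPConduit;
        import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

        Client client = ClientProxy.getClient(port);
        HTTPConduit conduit = (HTTPConduit) client.getConduit();

        HTTPClientPolicy policy = new HTTPClientPolicy();
        policy.setAllowChunking(false);   // buffer the body, send Content-Length
        conduit.setClient(policy);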


  • Unity framework - creating & disposing Entity Framework datacontexts at the appropriate time

    - by TobyEvans
    Hi there, with some kindly help from StackOverflow, I've got Unity Framework to create my chained dependencies, including an Entity Framework datacontext object:

        using (IUnityContainer container = new UnityContainer())
        {
            container.RegisterType<IMeterView, Meter>();
            container.RegisterType<IUnitOfWork, CommunergySQLiteEntities>(new ContainerControlledLifetimeManager());
            container.RegisterType<IRepositoryFactory, SQLiteRepositoryFactory>();
            container.RegisterType<IRepositoryFactory, WCFRepositoryFactory>("Uploader");

            container.Configure<InjectedMembers>()
                .ConfigureInjectionFor<CommunergySQLiteEntities>(
                    new InjectionConstructor(connectionString));

            MeterPresenter meterPresenter = container.Resolve<MeterPresenter>();
        }

    This works really well in creating my Presenter object and displaying the related view; I'm really pleased. However, the problem I'm running into now is over the timing of the creation and disposal of the Entity Framework object (and I suspect this will go for any IDisposable object). Using Unity like this, the SQL EF object "CommunergySQLiteEntities" is created straight away, as I've added it to the constructor of the MeterPresenter:

        public MeterPresenter(IMeterView view, IUnitOfWork unitOfWork, IRepositoryFactory cacheRepository)
        {
            this.mView = view;
            this.unitOfWork = unitOfWork;
            this.cacheRepository = cacheRepository;
            this.Initialize();
        }

    I felt a bit uneasy about this at the time, as I don't want to be holding open a database connection, but I couldn't see any other way using the Unity dependency injection. Sure enough, when I actually try to use the datacontext, I get this error:

        ((System.Data.Objects.ObjectContext)(unitOfWork)).Connection
        '((System.Data.Objects.ObjectContext)(unitOfWork)).Connection' threw an exception of type 'System.ObjectDisposedException'
        System.Data.Common.DbConnection {System.ObjectDisposedException}

    My understanding of the principle of IoC is that you set up all your dependencies at the top, resolve your object and away you go. However, in this case, some of the child objects, e.g. the datacontext, don't need to be initialised at the time the parent Presenter object is created (as you would by passing them in the constructor), but the Presenter does need to know what type to use for IUnitOfWork when it wants to talk to the database. Ideally, I want something like this inside my resolved Presenter:

        using (IUnitOfWork unitOfWork = new NewInstanceInjectedUnitOfWorkType())
        {
            // do unitOfWork stuff
        }

    so the Presenter knows what IUnitOfWork implementation to use to create and dispose of straight away, preferably from the original RegisterType call. Do I have to put another Unity container inside my Presenter, at the risk of creating a new dependency? This is probably really obvious to an IoC guru, but I'd really appreciate a pointer in the right direction. Thanks, Toby
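
    The usual pattern here is to inject a factory rather than a live context: the presenter then owns the create/use/dispose cycle, and nothing IDisposable has to outlive a single operation. A hedged sketch; IUnitOfWorkFactory is an invented abstraction for illustration, not a Unity type:

        public interface IUnitOfWorkFactory
        {
            IUnitOfWork Create();
        }

        public class SQLiteUnitOfWorkFactory : IUnitOfWorkFactory
        {
            private readonly string connectionString;

            public SQLiteUnitOfWorkFactory(string connectionString)
            {
                this.connectionString = connectionString;
            }

            public IUnitOfWork Create()
            {
                // a fresh, short-lived context per operation
                return new CommunergySQLiteEntities(connectionString);
            }
        }

        // Registration replaces the ContainerControlledLifetimeManager mapping:
        //     container.RegisterType<IUnitOfWorkFactory, SQLiteUnitOfWorkFactory>();
        // and inside the presenter:
        using (IUnitOfWork unitOfWork = this.unitOfWorkFactory.Create())
        {
            // do unitOfWork stuff; disposed as soon as the operation ends
        }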


  • Incompatibilities between Indy 9 and Windows Server 2003?

    - by mcmar
    I'm having a problem with a Delphi application on some Windows 2003 servers. It uses a webservice call to connect with another server and transmit data back and forth. As soon as the app gets to the Authenticate method, the app dies. The app has worked for years on previous boxes with Win Server 2003, but it doesn't on freshly built machines. The machines are set up the same way for the most part, but there is clearly some config setting that differs that I'm not able to track down. Also, while the error becomes apparent in the call to Authenticate, packet sniffing proves that nothing ever happens between the app and the server it's trying to contact, which strengthens my suspicion that something is dying early on while setting up the connection. I can't duplicate the error locally, so I can't step through the app in a debugger either. Any thoughts on why an Indy 9 Delphi web connection might silently fail?


  • How to avoid OLEDB converting "."s into "#"s in column names?

    - by Andrew Miner
    I'm using the ACE OLEDB driver to read from an Excel 2007 spreadsheet, and I'm finding that any '.' character in column names gets converted to a '#' character. For example, if I have the following in a spreadsheet:

        Name     Amt. Due   Due Date
        Andrew   12.50      4/1/2010
        Brian    20.00      4/12/2010
        Charlie  1000.00    6/30/2010

    the name of the second column would be reported as "Amt# Due" when read with the following code:

        OleDbConnection connection = new OleDbConnection(
            "Provider=Microsoft.ACE.OLEDB.12.0; Data Source={0}; " +
            "Extended Properties=\"Excel 12.0 Xml;HDR=YES;FMT=Delimited;IMEX=1\"");
        OleDbCommand command = new OleDbCommand("SELECT * FROM MyTable", connection);
        OleDbDataReader dataReader = command.ExecuteReader();
        System.Console.WriteLine(dataReader.GetName(1));

    I've read through all the documentation I can find and I haven't found anything which even mentions that this will happen. Has anyone run into this before? Is there a way to fix this behavior?
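
    This appears to be Jet/ACE behavior rather than anything in the calling code: the engine substitutes '#' for characters it treats as special in column names, such as '.', so the rename happens before ADO.NET ever sees the schema. A hedged workaround sketch that maps the reported names back (lossy if a header legitimately contains '#'):

        for (int i = 0; i < dataReader.FieldCount; i++)
        {
            string reported = dataReader.GetName(i);        // e.g. "Amt# Due"
            string original = reported.Replace('#', '.');   // best-effort reverse mapping
            System.Console.WriteLine("{0} -> {1}", reported, original);
        }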


  • Programmatically retrieve disconnected network adapter information in .NET

    - by Soo Wei Tan
    I have an application written in C# that needs to retrieve information like IP address and subnet mask from a disconnected network adapter. I've tried using various methods such as WMI and the .NET NetworkAdapter class, but they don't return any useful data when the network adapter is disconnected. I'm pretty sure Windows keeps this information somewhere, since I can apply network settings using netsh and it appears correctly in the Control Panel. One thing that worked for me in XP was to parse the output of the netsh tool, and it would return information even for a disconnected adapter. However, this doesn't seem to work on Windows 7.

    Win XP output:

        Configuration for interface "Local Area Connection 5"
        DHCP enabled:          No
        IP Address:            169.254.0.128
        SubnetMask:            255.255.255.0
        InterfaceMetric:       0

    Win7 output:

        Configuration for interface "Local Area Connection 2"
        DHCP enabled:          No
        InterfaceMetric:       5

    Any ideas?
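
    Static IP configuration persists in the registry regardless of link state, so reading it directly sidesteps both WMI and netsh. A hedged C# sketch; adapterGuid is an assumption to be matched up with the adapter (e.g. via NetworkInterface.Id), and the value names are the standard TCP/IP service ones:

        using Microsoft.Win32;

        string keyPath = @"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\" + adapterGuid;
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(keyPath))
        {
            bool dhcp = Convert.ToInt32(key.GetValue("EnableDHCP")) != 0;
            string[] addresses = (string[])key.GetValue("IPAddress");   // REG_MULTI_SZ
            string[] masks = (string[])key.GetValue("SubnetMask");
        }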


  • How to gracefully exit SLIME and EMACS

    - by Gregory Gelfond
    Hi all, I have a question regarding how to gracefully exit SLIME when I quit Emacs. Here is the relevant portion of my config file:

        ;; SLIME configuration
        (setq inferior-lisp-program "/usr/local/bin/sbcl")
        (add-to-list 'load-path "~/Scripts/slime/")
        (require 'slime)
        (slime-setup)

        ;; configure SLIME to gracefully quit when emacs
        ;; terminates
        (defun slime-smart-quit ()
          (interactive)
          (when (slime-connected-p)
            (if (equal (slime-machine-instance) "Gregory-Gelfonds-MacBook-Pro.local")
                (slime-quit-lisp)
              (slime-disconnect)))
          (slime-kill-all-buffers))

        (add-hook 'kill-emacs-hook 'slime-smart-quit)

    To my knowledge this should automatically kill SLIME and its associated processes whenever I exit Emacs. However, every time I exit, I still get the prompt:

        Proc           Status  Buffer           Command
        ----           ------  ------           -------
        SLIME Lisp     open    *cl-connection*  (network stream connection to 127.0.0.1)
        inferior-lisp  run     *inferior-lisp*  /usr/local/bin/sbcl

        Active processes exist; kill them and exit anyway? (yes or no)

    Can someone shed some insight as to what I'm missing from my config? Thanks in advance.
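
    The ordering is the catch: save-buffers-kill-emacs asks about active processes before kill-emacs-hook ever runs, so slime-smart-quit fires too late to prevent the prompt. A hedged sketch of one common workaround, clearing each SLIME process's query-on-exit flag once the connection is up (on older Emacsen the function was process-kill-without-query):

        ;; The "Active processes exist" check happens *before* kill-emacs-hook,
        ;; so clear the query flags up front, as soon as SLIME has connected.
        (add-hook 'slime-connected-hook
                  (lambda ()
                    (dolist (proc (process-list))
                      (when (string-match "SLIME\\|cl-connection\\|inferior-lisp"
                                          (process-name proc))
                        (set-process-query-on-exit-flag proc nil)))))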


  • Windows Workflow and sql script in declarative config like InRule

    - by Satish
    We have been using InRule for our rule needs, but we have found that it does not scale well, so we are investigating Windows Workflow. Within InRule we could configure pretty much any task; for example, our SQL scripts and stored procedures were all part of a separate rule config file. I am wondering if there is similar functionality within Windows Workflow, where I could just call a declarative task and pass it a bunch of parameters. This task should contain the SQL script I would be executing, and we should be able to change the script at runtime without recompiling the workflow code. Is this possible in Windows Workflow, and how can I accomplish it? Additionally, for SQL execution within Windows Workflow, how does it get the connection string? Should it be passed from the calling program? Is passing it as an input parameter from the calling app via the Dictionary object the best way, or can the workflow code see my calling program's app.config and get the connection string from there?
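
    On the hosting side, WF 3.x takes workflow inputs as a Dictionary<string, object> whose keys must match public properties on the workflow class, and since the workflow runs inside the host process, ConfigurationManager called from an activity reads the host's app.config. A hedged sketch (RuleWorkflow, the key names, and the script path are invented for illustration):

        // Host side: load the (externally editable) SQL text and pass it in
        // together with the connection string; changing the script requires
        // no workflow recompilation.
        Dictionary<string, object> args = new Dictionary<string, object>();
        args["SqlScript"] = File.ReadAllText(@"Rules\calculate-field.sql");
        args["ConnectionString"] =
            ConfigurationManager.ConnectionStrings["Main"].ConnectionString;

        WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(RuleWorkflow), args);
        instance.Start();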


  • IIS Strategies for Accessing Secured Network Resources

    - by ErikE
    Problem: A user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database needs to gain access to network resources such as file shares (the most common) or a database on a different server. Permission is denied. This is because the user the service is running under doesn't have network permissions in the first place, or if it does, it doesn't have rights to access the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are some workarounds I'm aware of:

    1. Run IIS as a custom-created domain user who is granted high permissions. If permissions are granted one file share at a time, then every time I want to read from a new share, I would have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it is going to get really complicated. If permissions are just opened up wide for the user to access any file shares in our domain, then this seems like an unnecessary security surface area to present. This also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface-area problem.

    2. Still use the IUSR account, but give it network permissions and set up the same user name on the remote resource (not a domain user, a local user). This also has its problems. For example, there's a file share I am using that I have full rights to for sharing, but I can't log in to the machine. So I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin.

    3. Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one. This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place.

    4. Connect using Kerberos, then delegate. This sounds good in principle but has all sorts of problems. First of all, if you're using virtual web sites where the domain name you connect to the site with is not the base machine name (as we do frequently), then you have to set up a Service Principal Name on the webserver using Microsoft's SetSPN utility. It's complicated and apparently prone to errors. Also, you have to ask your network/domain admin to change security policy for both the web server and the domain account so they are "trusted for delegation." If you don't get everything perfectly right, suddenly your intended Kerberos authentication is NTLM instead, and you can only impersonate rather than delegate, and thus no reaching out over the network as the user. Also, this method can be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have.

    5. Create a service or COM+ application that fetches the resource for the web site. Services and COM+ packages run with their own set of credentials. Running as a high-privilege user is okay, since they can do their own security and deny requests that are not legitimate, putting control in the hands of the application developer instead of the network admin. Problems: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely registry permissions. I trolled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions. That seems like the same bad practice as just running IIS as a high-privilege user. Note: this is actually really simple. In a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example), and build a dll, which you register as a COM+ object. In your web server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work.

    6. Map drive letters to shares. This could theoretically work, but in my mind it's not really a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by others than a network admin, this is also going to mean that there are either way too many shared drives (small granularity) or too much permission granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR gets the drives. Mapping a drive is for the current user; I don't know the IUSR account password to log in as it and create the mappings.

    7. Move the resources local to the web server/database. There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution, either. And it won't work when the resource is a service rather than a file.

    8. Move the service to the final web server/database. I suppose I could run a web server on my SQL Server database machine, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No.

    9. Virtual directories in IIS. I know that virtual directories can help make remote resources look as though they are local, and this supports using custom credentials for each virtual directory. I haven't been able to come up with, yet, how this would solve the problem for system calls. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but this isn't going to help me make a connection to an Access database, a SQL Server database, or any other resource that uses a connection library rather than being able to just read all the bytes and work with them.

    I wish there was some kind of "service tunnel" that I could create. Think about how a VPN makes remote resources look like they are local. With a richer aliasing mechanism, perhaps code-based, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost talking about a specialized local proxy server. Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, I need a web site to connect to an Access database on a file share. Here we go again...


  • PayPal IPN - having trouble accessing session data?

    - by Martin Bean
    Hello, all. I'm having issues with PayPal IPN integration, where it seems I cannot get my solution to read session variables. Basically, in my shop module script, I store the customer's details as provided by PayPal to an orders table. However, I also wish to save the products ordered in a transaction to a separate table linked by the order ID. It's this second part of the script that's not working, where I loop through the products in the session and save them to the orders_products table. Is there a reason why the session data is not being read? The code within shop.php is as follows:

        if ($paypal->validate_ipn()) {
            $name = $paypal->ipn_data['address_name'];
            $street_1 = $paypal->ipn_data['address_street'];
            $street_2 = "";
            $city = $paypal->ipn_data['address_city'];
            $state = $paypal->ipn_data['address_state'];
            $zip = $paypal->ipn_data['address_zip'];
            $country = $paypal->ipn_data['address_country'];
            $txn_id = $paypal->ipn_data['txn_id'];

            $sql = "INSERT INTO orders (name, street_1, street_2, city, state, zip, country, txn_id)
                    VALUES (:name, :street_1, :street_2, :city, :state, :zip, :country, :txn_id)";
            $smt = $this->pdo->prepare($sql);
            $smt->bindParam(':name', $name, PDO::PARAM_STR);
            $smt->bindParam(':street_1', $street_1, PDO::PARAM_STR);
            $smt->bindParam(':street_2', $street_2, PDO::PARAM_STR);
            $smt->bindParam(':city', $city, PDO::PARAM_STR);
            $smt->bindParam(':state', $state, PDO::PARAM_STR);
            $smt->bindParam(':zip', $zip, PDO::PARAM_STR);
            $smt->bindParam(':country', $country, PDO::PARAM_STR);
            $smt->bindParam(':txn_id', $txn_id, PDO::PARAM_INT);
            $smt->execute();

            // save products to orders relationship
            $order_id = $this->pdo->lastInsertId();
            $cart = $this->session->get('cart');

            foreach ($cart as $product_id => $item) {
                $quantity = $item['quantity'];
                $sql = "INSERT INTO orders_products (order_id, product_id, quantity)
                        VALUES ('$order_id', '$product_id', '$quantity')";
                $res = $this->pdo->query($sql);
            }

            $this->session->del('cart');
            mail('[email protected]', 'IPN result', 'IPN was successful on wrestling-wear.com');
        } else {
            mail('[email protected]', 'IPN result', 'IPN failed on wrestling-wear.com');
        }

    And I'm using the PayPal IPN class for PHP as found here: http://www.micahcarrick.com/04-19-2005/php-paypal-ipn-integration-class.html, but the contents of the validate_ipn() method are as follows:

        public function validate_ipn()
        {
            $url_parsed = parse_url($this->paypal_url);
            $post_string = '';
            foreach ($_POST as $field => $value) {
                $this->ipn_data[$field] = $value;
                $post_string .= $field.'='.urlencode(stripslashes($value)).'&';
            }
            $post_string .= "cmd=_notify-validate"; // append IPN command

            // open the connection to PayPal
            $fp = fsockopen($url_parsed['host'], "80", $err_num, $err_str, 30);
            if (!$fp) {
                // could not open the connection; if logging is on, the error message will be in the log
                $this->last_error = "fsockopen error no. $err_num: $err_str";
                $this->log_ipn_results(false);
                return false;
            } else {
                // post the data back to PayPal
                fputs($fp, "POST $url_parsed[path] HTTP/1.1\r\n");
                fputs($fp, "Host: $url_parsed[host]\r\n");
                fputs($fp, "Content-type: application/x-www-form-urlencoded\r\n");
                fputs($fp, "Content-length: ".strlen($post_string)."\r\n");
                fputs($fp, "Connection: close\r\n\r\n");
                fputs($fp, $post_string."\r\n\r\n");

                // loop through the response from the server and append to variable
                while (!feof($fp)) {
                    $this->ipn_response .= fgets($fp, 1024);
                }
                fclose($fp); // close connection
            }

            if (eregi("VERIFIED", $this->ipn_response)) {
                // valid IPN transaction
                $this->log_ipn_results(true);
                return true;
            } else {
                // invalid IPN transaction; check the log for details
                $this->last_error = 'IPN Validation Failed.';
                $this->log_ipn_results(false);
                return false;
            }
        }

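    The session is empty by design here: the IPN request is PayPal's server posting directly to the script, so it carries the buyer's transaction data but no browser cookie, and PHP therefore starts a fresh, empty session. The usual fix is to persist the cart server-side before handing off to PayPal and round-trip a key through PayPal's custom field. A hedged sketch (savePendingCart and loadPendingCart are invented helpers backed by, say, a pending_carts table):

        // 1. Checkout page, before redirecting the buyer to PayPal:
        $pending_id = uniqid('order', true);
        savePendingCart($pending_id, $this->session->get('cart'));
        // ...and include it in the PayPal form:
        //     <input type="hidden" name="custom" value="<?php echo $pending_id; ?>">

        // 2. IPN handler (no session available):
        $pending_id = $paypal->ipn_data['custom'];
        $cart = loadPendingCart($pending_id); // read the cart back from the database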
