Search Results

Search found 7211 results on 289 pages for 'enable'.


  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and here at StackOverflow, looked through the related questions and searched, but I'm not finding what I'm looking for. I've also searched documentation on DNN. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository) etc from people who are experienced in using DotNetNuke with SVN. We use SVN for all our source control, and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means when we do web sites, in Visual Studio, we do file based web sites, rather than setting them up in the local IIS. It just makes things easier for us. However, with DNN, it appears that even if you get the source code, it is expecting to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C drives, and onto a shared drive on a server. This is to enable backups in addition to our normal source control. (This was a management decision). So that means that we need to change the virtual web app when we make the move. Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious? Edit - added I'm willing to accept answers like "We tried it and never got it to work", and "It can't be done" as answers. I'm always open to hearing "It can't be done the way you want. You need to change your procedures to match how it works" if necessary. I guess if you've got experience trying this and just couldn't get it to work, I can learn from your experience that way as well, but some detail would be good.

    Read the article

  • Ext JS 4.2.1 loading controller - best practice

    - by Hown_
    I am currently developing an Ext JS application with many views/controllers/... and I am wondering what the best practice is for loading the JS controllers, views, and so on. Currently my application is defined like this: // enable javascript cache for debugging, otherwise Chrome breakpoints are lost Ext.Loader.setConfig({ disableCaching: false }); Ext.require('Ext.util.History'); Ext.require('app.Sitemap'); Ext.require('app.Error'); Ext.define('app.Application', { name: 'app', extend: 'Ext.app.Application', views: [ // TODO: add views here 'app.view.Viewport', 'app.view.BaseMain', 'app.view.Main', 'app.view.ApplicationHeader', //administration 'app.view.administration.User' ... ], controllers: [ 'app.controller.Viewport', 'app.controller.Main', 'app.controller.ApplicationHeader', //administration 'app.controller.administration.User', ... ], stores: [ // stores in there.. ] }); This forces the client to load all my views and controllers at startup and, of course, calls every controller's init method. I need to load data every time I change my view, so I cannot load it in my controller's init function; I assume I would have to do something like this: init: function () { this.control({ '#administration_User': { afterrender: this.onAfterRender } }); }, Is there a better way to do this, or just another event? The main thing I am asking myself, though, is whether it is best practice to load all the JavaScript at startup. Wouldn't it be better to load only the controllers/views/... that the client actually needs right now, or should I load all the JS at startup? If I do want to load the controllers dynamically, how could I do that? I assume I would have to remove them from my application arrays (views, controllers, stores), create an instance when I need it, and maybe set the view in the controller's init?! What is best practice?
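    One option, shown here as a minimal sketch with hypothetical names rather than an official Ext JS recommendation: keep the controllers declared in the application, but move the data loading out of init() and into a view event that fires on every activation, so nothing heavy runs once-only at startup.

        // Sketch only: '#administration_User' and the grid lookup are assumptions based on the post.
        Ext.define('app.controller.administration.User', {
            extend: 'Ext.app.Controller',

            init: function () {
                this.control({
                    '#administration_User': {
                        // 'activate' fires each time the view becomes active (e.g. in a tab or
                        // card layout), unlike init(), which runs once when the application launches.
                        activate: this.onActivate
                    }
                });
            },

            onActivate: function (view) {
                var grid = view.down('grid');   // reload whatever this screen displays
                if (grid) {
                    grid.getStore().load();
                }
            }
        });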

    Read the article

  • DotNetOpenAuth / WebSecurity Basic Info Exchange

    - by Jammer
    I've gotten a good number of OAuth logins working on my site now. My implementation is based on the WebSecurity classes with amends to the code to suit my needs (I pulled the WebSecurity source into mine). However I'm now facing a new set of problems. In my application I have opted to make the user email address the login identifier of choice. It's naturally unique and suits this use case. However, the OAuth "standards" strikes again. Some providers will return your email address as "username" (Google) some will return the display name (Facebook). As it stands I see to options given my particular scenario: Option 1 Pull even more framework source code into my solution until I can chase down where the OpenIdRelyingParty class is actually interacted with (via the DotNetOpenAuth.AspNet facade) and make addition information requests from the OpenID Providers. Option 2 When a user first logs in using an OpenID provider I can display a kind of "complete registration" form that requests missing info based on the provider selected.* Option 2 is the most immediate and probably the quickest to implement but also includes some code smells through having to do something different based on the provider selected. Option 1 will take longer but will ultimately make things more future proof. I will need to perform richer interactions down the line so this also has an edge in that regard. The more I get into the code it does seem that the WebSecurity class itself is actually very limiting as it hides lots of useful DotNetOpenAuth functionality in the name of making integration easier. Andrew (the author of DNOA) has said that the Attribute Exchange stuff happens in the OpenIdRelyingParty class but I cannot see from the DotNetOpenAuth.AspNet source code where this class is used so I'm unsure of what source would need to be pulled into my code in order to enable the functionality I need. Has anyone completely something similar?

    Read the article

  • Is accessing USB from a web application possible at all, cross-browser and cross-OS?

    - by Ved
    Hey guys, I am wondering if there is any way we can achieve this. I have heard different things about Silverlight 4, JavaScript and ActiveX controls, but I have not seen a code demo for any of them. Does anyone know of an available web component, or how to write one? We would really like to access the client's USB drive via the web and read/write data on it, and it has to work on any operating system in any web browser. Thanks. UPDATE: What about WPF in browser mode? I read that I can host my WPF apps inside the browser, somewhat like a smart client. There is a great example of doing this via Silverlight 4, but the author mentions that accessing USB on a Mac would require: 1) enabling the execution of AppleScripts, which would give us the same amount of control on a Mac machine as we have on a Windows machine; 2) adding an overload to ComAutomationFactory.CreateObject() that calls the "Tell Application" command under the scenes and returns an AppleScript object, which would work extremely well for Office automation. For any other operating-system feature, you would have to code the OS access twice. I did not quite understand this. Has anyone tried it?

    Read the article

  • Fastest inline-assembly spinlock

    - by sigvardsen
    I'm writing a multithreaded application in C++ where performance is critical. I need to use a lot of locking while copying small structures between threads, and for this I have chosen to use spinlocks. I have done some research and speed testing on this and found that most implementations are roughly equally fast: Microsoft's CRITICAL_SECTION, with SpinCount set to 1000, scores about 140 time units; implementing this algorithm with Microsoft's InterlockedCompareExchange scores about 95 time units; I've also tried some inline assembly with __asm {} using something like this code, and it scores about 70 time units, but I am not sure that a proper memory barrier has been created. Edit: the times given here are the time it takes for 2 threads to lock and unlock the spinlock 1,000,000 times. I know this isn't a big difference, but since a spinlock is such a heavily used object, one would think programmers would have agreed on the fastest possible way to make one; Googling it leads to many different approaches, however. I would think the aforementioned method would be the fastest if implemented in inline assembly using the instruction CMPXCHG8B instead of comparing 32-bit registers. Furthermore, memory barriers must be taken into account; this could be done by LOCK CMPXCHG8B (I think?), which guarantees "exclusive rights" to the shared memory between cores. Finally, some suggest that busy waits should be accompanied by REP NOP (the PAUSE hint), which lets Hyper-Threading processors switch to the other hardware thread, but I am not sure whether this is true or not. From my performance test of different spinlocks it can be seen that there is not much difference, but for purely academic purposes I would like to know which one is fastest. However, as I have extremely limited experience with assembly language and memory barriers, I would be happy if someone could write the assembly code for the last example I provided, with LOCK CMPXCHG8B and proper memory barriers, in the following template: __asm { spin_lock: ;locking code. spin_unlock: ;unlocking code. }
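    A minimal sketch of the requested template, not a verified "fastest" implementation: 32-bit MSVC inline assembly, assuming a hypothetical global lock word where 0 means free and 1 means held. Note that LOCK CMPXCHG on a 32-bit word is enough for a lock flag, so CMPXCHG8B is not actually required; the LOCK prefix already acts as a full memory barrier on x86, and REP NOP is the encoding of the PAUSE hint mentioned above.

        volatile long lock_word = 0;   // hypothetical shared flag: 0 = free, 1 = held

        void spin_lock()
        {
            __asm {
            retry:
                mov  ecx, 1                    // value to store if the lock is free
                xor  eax, eax                  // value we expect to find (0 = unlocked)
                lock cmpxchg [lock_word], ecx  // atomic compare-and-swap; full barrier on x86
                je   acquired                  // ZF set -> the swap happened, we own the lock
                rep  nop                       // PAUSE hint for the sibling hyper-thread
                jmp  retry
            acquired:
            }
        }

        void spin_unlock()
        {
            __asm {
                mov  dword ptr [lock_word], 0  // plain store; x86 stores have release semantics
            }
        }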

    Read the article

  • Struts 2 action method not being called properly

    - by Ziplin
    On default I want my struts2 app to forward to an action: <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE struts PUBLIC "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN" "http://struts.apache.org/dtds/struts-2.0.dtd"> <struts> <constant name="struts.enable.DynamicMethodInvocation" value="false" /> <constant name="struts.devMode" value="false" /> <package name="myApp" namespace="/myApp" extends="struts-default"> <action name="Login_*" method="{1}" class="myApp.SessionManager"> <result name="input">/myApp/Login.jsp</result> <result type="redirectAction">Menu</result> </action> </package> <package name="default" namespace="/" extends="struts-default"> <default-action-ref name="index" /> <action name="index"> <result type="redirectAction"> <param name="actionName">Login_input.action</param> <param name="namespace">/myApp</param> </result> </action> </package> </struts> I'm looking for the application to call SessionManager.input(), but instead it calls SessionManager.execute().
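    A hedged guess at the usual fix rather than a confirmed answer: with a redirectAction result the actionName should be given without the ".action" suffix, otherwise the Login_* wildcard no longer maps {1} to "input" and Struts falls back to execute().

        <action name="index">
            <result type="redirectAction">
                <param name="actionName">Login_input</param>  <!-- no ".action" suffix here -->
                <param name="namespace">/myApp</param>
            </result>
        </action>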

    Read the article

  • Why is my JavaScript file sometimes compressed and sometimes not? (IIS gzip problem)

    - by Kevin Yang
    i enable gzip for javascript file in my iis settings, here 's the corresponding config section. <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="10" dynamicCompressionLevel="8" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/soap+msbin1" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/javascript" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression> currently, when i download my js file, it seems that sometimes server return the gzip one, and sometimes not. i dont know why, and how to debug that. If a file is already gzipped, it should be cached in local disk, and next time someone visit that file again, iis kernel should return the cache gzip file directly without compressing it again. Is that right?
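    One common explanation, offered as a sketch rather than a confirmed diagnosis: IIS 7 only compresses a static file once it becomes a "frequent hit", i.e. it has been requested frequentHitThreshold times within frequentHitTimePeriod; after that the compressed copy is cached in the compression directory and served directly. Lowering the threshold makes even the first request come back gzipped:

        <system.webServer>
          <serverRuntime enabled="true"
                         frequentHitThreshold="1"
                         frequentHitTimePeriod="00:00:10" />
        </system.webServer>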

    Read the article

  • Apache proxy to Lighttpd: changing $_SERVER['HTTP_HOST'] in php

    - by watain
    I have a WordPress blog running on lighttpd-1.4.19, listening on at www00:81. On the same host, apache-2.2.11 listens on port 80, which creates a proxy connection from http://blog.mydomain.org:80 to http://blog.mydomain.org:81. The Apache virtualhost looks as follows: <VirtualHost *:80> ServerName blog.mydomain.org ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass / http://blog.mydomain.org:81/ ProxyPassReverse / http://blog.mydomain.org:81/ </VirtualHost> Using debug.log-request-handling = "enable" I get the following log entry when I browse http://blog.mydomain.org:80 (notice the Host headers): 2010-05-10 08:47:14: (request.c.294) fd: 6 request-len: 853 GET / HTTP/1.1 Host: blog.mydomain.org:81 [...] 2010-05-10 08:47:15: (request.c.294) fd: 8 request-len: 754 GET /wp-content/uploads/2010/01/image.gif?w=280 HTTP/1.1 Host: www00:81 My problem: as far as I know, the PHP environment variable $_SERVER['HTTP_HOST'] is set to that Host header variable. Unfortunately, WordPress uses that variable in their system to create URLs to pictures on the blog. These URLs won't be accessible behind a firewall of course. How can I force the host header to be blog.mydomain.org instead of blog.mydomain.org:81, respectively www00:81? I already added set server.name = "blog.mydomain.org" to my lighttpd.conf, but this didn't work. Any suggestions are appreciated, thank you.
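    A minimal sketch of the usual fix, assuming mod_proxy on Apache 2.2: ProxyPreserveHost makes Apache forward the client's original Host header instead of rewriting it to the backend host:port, so $_SERVER['HTTP_HOST'] in PHP sees blog.mydomain.org.

        <VirtualHost *:80>
            ServerName blog.mydomain.org
            ProxyRequests Off
            ProxyPreserveHost On               # pass the original Host header through to lighttpd
            ProxyPass / http://blog.mydomain.org:81/
            ProxyPassReverse / http://blog.mydomain.org:81/
        </VirtualHost>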

    Read the article

  • Google Chrome and cache or memory leaks

    - by Alexey Ogarkov
    Hello all, I have a big problem with Google Chrome and its memory. My app displays several image charts to the user and reloads them every 10 seconds. In the interval I have code like this: var image = new Image(); var src = 'myurl/image'+new Date().getTime(); image.onload = function() { document.getElementById('myimage').src = src; image.onload = image.onabort = image.onerror = null; } image.src = src; I have no memory leaks in Firefox or IE. Here are the response headers for the images: Server Apache-Coyote/1.1 Vary * Cache-Control no-store (// I have also tried no-cache, must-revalidate and so on here) Content-Type image/png Content-Length 11131 Date Mon, 31 May 2010 14:00:28 GMT Vary * (taken from here). The about:cache page does not list my images as cached. Enabling the purge-memory button for Chrome (the --purge-memory-button parameter) does not help. The images are PNG-24, so I think the problem is not the cache; maybe Google Chrome is not releasing the memory for old images. Please help, any suggestions are welcome. Thanks.
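    One thing worth trying, sketched under the assumption that the preload-then-swap step is not essential: reuse the existing <img> element and only change its src with a cache-busting query string, so no new Image objects are allocated on each cycle.

        setInterval(function () {
            var img = document.getElementById('myimage');
            img.src = 'myurl/image?' + new Date().getTime();  // cache-busting timestamp
        }, 10000);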

    Read the article

  • apache web server configuration problem

    - by mohit
    I want the Apache server to serve only the /var/www/ directory; right now it serves every file on the system, starting from "/". I tried to edit httpd.conf in /etc/apache2 (initially it was empty) and placed the following content in it: <Directory /> Options None AllowOverride None </Directory> DocumentRoot "/var/www" <Directory "/var/www"> Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> Then I saved it, restarted the Apache server and put the location /var/www in the browser address bar, but it still shows the higher-level directories too. I then edited the files default and default-ssl in the sites-available folder and repeated the same process, but Apache still serves all files on my system. 2. When I try to run the command gedit httpd.conf I get the error: (gedit:2696): EggSMClient-WARNING **: Failed to connect to the session manager: None of the authentication protocols specified are supported GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)
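    A sketch of the usual Apache 2.2 lock-down, assuming Debian/Ubuntu where the live configuration is /etc/apache2/apache2.conf plus the enabled site under sites-available (httpd.conf is empty there, so edits to it may simply never take effect); the key point is that the root <Directory /> block also needs a Deny rule, not just Options None:

        <Directory />
            Options None
            AllowOverride None
            Order deny,allow
            Deny from all                  # nothing outside explicitly allowed directories is served
        </Directory>

        DocumentRoot "/var/www"
        <Directory /var/www/>
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>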

    Read the article

  • PHP code automatically commented out with Apache

    - by clement
    We have installed apache 2.2, and activeperl to run bugzilla, all that on a Windows Server 2003. Here We want to install PHP on the server to install a wiki. I followed those steps: tutorial to install PHP and enable it from Apache. After all those steps, I restart couples of times, and When I try a simple phpinfo() on PHP, the whole PHP code is commented: < ! - - ?php phpinfo(); ? - - Now, the httpd.conf was already edited for the PERL and it can be those edits that make the mistake. Here is the whole httpd.conf file: ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2" Listen 6969 LoadModule actions_module modules/mod_actions.so LoadModule alias_module modules/mod_alias.so LoadModule asis_module modules/mod_asis.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule php5_module "c:/php/php5apache2_2.dll" LoadModule authn_default_module modules/mod_authn_default.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule cgi_module modules/mod_cgi.so LoadModule dir_module modules/mod_dir.so LoadModule env_module modules/mod_env.so LoadModule include_module modules/mod_include.so LoadModule isapi_module modules/mod_isapi.so LoadModule log_config_module modules/mod_log_config.so LoadModule mime_module modules/mod_mime.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule setenvif_module modules/mod_setenvif.so User daemon Group daemon ServerAdmin [email protected] DocumentRoot C:/bugzilla-4.4.2/ Options FollowSymLinks AllowOverride None Order deny,allow Deny from all Options Indexes FollowSymLinks ExecCGI AllowOverride All Order allow,deny Allow from all ScriptInterpreterSource Registry-Strict DirectoryIndex index.html index.html.var index.cgi index.php Order allow,deny Deny from all Satisfy All ErrorLog "logs/error.log" LogLevel warn LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/" AllowOverride None Options None Order allow,deny Allow from all DefaultType text/plain AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddHandler cgi-script .cgi AddType application/x-httpd-php .php SSLRandomSeed startup builtin SSLRandomSeed connect builtin PHPIniDir "c:/php"
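    A hedged checklist rather than a definitive fix: when the browser shows the <?php ... ?> block as a comment, Apache served the file as plain text instead of handing it to mod_php. The usual minimum, assuming the paths from the post, is shown below; also confirm that the httpd.conf you edited is the one this Apache instance actually loads, that the file lives under the configured DocumentRoot, and that Apache was restarted afterwards.

        LoadModule php5_module "c:/php/php5apache2_2.dll"
        PHPIniDir "c:/php"
        AddHandler application/x-httpd-php .php   # AddHandler (or the existing AddType) must cover .php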

    Read the article

  • How to use data receive event in Socket class?

    - by affan
    I have written a simple client that uses TcpClient in .NET to communicate. To wait for data messages from the server, I use a read thread that makes a blocking Read() call on the socket. When I receive something I have to raise various events. These events occur on the worker thread, so the UI cannot be updated from them directly. Invoke() can be used, but it is difficult for the end developer, because my SDK may be used by people who have no UI at all or who use WPF, and WPF handles this differently. Invoke() in our test app, a MicroStation add-in, currently takes a lot of time: MicroStation is a single-threaded application, and calling Invoke on its thread is bad because it is always busy drawing and doing other work, so the message takes too long to process. I want my events to be raised on the same thread as the UI, so the user does not have to go through the Dispatcher or Invoke. So, how can I be notified by the socket when data arrives? Is there a built-in callback for that? I would like a WinSock-style receive event without a separate read thread, and I also do not want to use a window timer to poll for data. I found the IOControlCode.AsyncIO flag for the IOControl() function, whose help says "Enable notification for when data is waiting to be received. This value is equal to the Winsock 2 FIOASYNC constant", but I could not find any example of how to use it to get a notification. If I were writing in MFC/Winsock, I would create a window of size (0,0) used only to listen for the data-receive event and other socket events, but I don't know how to do that in a .NET application.
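    A minimal sketch with hypothetical class and event names: the Socket class exposes asynchronous receives through BeginReceive/EndReceive, which removes the dedicated blocking read thread; a captured SynchronizationContext then delivers the event back to whatever thread created the receiver, so callers do not need Invoke() themselves.

        using System;
        using System.Net.Sockets;
        using System.Threading;

        class AsyncReceiver
        {
            private readonly Socket _socket;
            private readonly byte[] _buffer = new byte[4096];
            private readonly SynchronizationContext _context;

            public event Action<byte[]> DataReceived;

            public AsyncReceiver(TcpClient client)
            {
                _socket = client.Client;
                // captured on the thread that constructs the receiver (e.g. the UI thread)
                _context = SynchronizationContext.Current ?? new SynchronizationContext();
            }

            public void Start()
            {
                _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
            }

            private void OnReceive(IAsyncResult ar)
            {
                int read = _socket.EndReceive(ar);        // completes on a thread-pool thread
                if (read == 0) return;                    // remote side closed the connection
                byte[] data = new byte[read];
                Array.Copy(_buffer, data, read);
                _context.Post(state =>                    // deliver the event on the captured context
                {
                    var handler = DataReceived;
                    if (handler != null) handler(data);
                }, null);
                Start();                                  // queue the next receive
            }
        }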

    Read the article

  • How to disable server-side caching on IIS 7.5 (ASP.NET MVC 3)

    - by troebr
    I'm struggling with my IIS setup regarding caching, here's a brief description of my problem: I'm making a site for mobile and non-mobile, sharing the same controllers. IE: mysite/page will serve either mysite/page.cshtml, or mysite/M/page.cshtml, depending on the device. Here's the catch, it worked fine with my local and integration environment (cassiini and iis 6), but on another machine (2008r2/iis 7.5), apparently there is an aggressive server-side caching policy: If I access the website from a desktop machine, I have the correct pages (desktop version) If now I use my mobile phone to access the site, I will have the desktop version, (which implies a server-side cache, my phone is not using the same network). On the contrary, if I were to restart the server and access the site using my phone first, then I will get the mobile version on my desktop (only for the pages I already visited of course). I tried 2 solutions so far: Disabling OutputCache from my Web.config: <httpModules> [..] <remove name="OutputCache" /> </httpModules> And unchecking "Enable output cache" in "Output Caching" for my site in IIS. What's bugging me is that I do not have this problem with my other server (iis 6.0), although caching is enabled on this one, which leads me to think it is related to iis 7 caching addition. My question is simple: how does one disable server-side caching on IIS 7.5? Thanks in advance for your iis lights!
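    One thing to rule out, sketched under the assumption that the cached responses come from IIS itself rather than from ASP.NET: removing the OutputCache module only affects ASP.NET's output cache, while IIS 7.5 can still cache responses both in user mode and in the http.sys kernel cache; both can be switched off per site in web.config:

        <system.webServer>
          <caching enabled="false" enableKernelCache="false" />
        </system.webServer>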

    Read the article

  • What's the order of execution in property setters when using IDataErrorInfo?

    - by Benny Jobigan
    Situation: Many times with WPF, we use INotifyPropertyChanged and IDataErrorInfo to enable binding and validation on our data objects. I've got a lot of properties that look like this: public SomeObject SomeData { get { return _SomeData; } set { _SomeData = value; OnPropertyChanged("SomeData"); } } Of course, I have an appropriate overridden IDataErrorInfo.this[] in my class to do validation. Question: In a binding situation, when does the validation code get executed? When is the property set? When is the setter code executed? What if the validation fails? For example: User enters new data. Binding writes data to property. Property set method is executed. Binding checks this[] for validation. If the data is invalid, the binding sets the property back to the old value. Property set method is executed again. This is important if you are adding "hooks" into the set method, like: public string PathToFile { get { return _PathToFile; } set { if (_PathToFile != value && // prevent unnecessary actions OnPathToFileChanging(value)) // allow subclasses to do something or stop the setter { _PathToFile = value; OnPathToFileChanged(); // allow subclasses to do something afterwards OnPropertyChanged("PathToFile"); } } }
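    As far as WPF's default behaviour goes, stated here as a hedged summary with a hypothetical indexer: with ValidatesOnDataErrors=True the binding first writes the value, so the setter runs once, then queries this[propertyName]; a non-empty string marks the control invalid, but the old value is not written back and the setter is not executed a second time.

        public string this[string propertyName]
        {
            get
            {
                // called by the binding after the setter has already run
                if (propertyName == "PathToFile" && string.IsNullOrEmpty(_PathToFile))
                    return "Path must not be empty";   // UI shows the error; _PathToFile keeps the value
                return null;                           // null or "" means "valid"
            }
        }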

    Read the article

  • iPhone SpeakHere example produces different number of samples

    - by pion
    I am looking at the SpeakHere example and added the following: // Overriding the output audio route UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker; AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof (audioRouteOverride), &audioRouteOverride); assert(status == noErr); // Changing the default output route. The new output route remains in effect unless you change the audio session category. This option is available starting in iPhone OS 3.1. UInt32 doChangeDefaultRoute = 1; status = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute); assert(status == noErr); // Enable Bluetooth. See audioRouteOverride & doChangeDefaultRoute above // http://developer.apple.com/iphone/library/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Cookbook/Cookbook.html#//apple_ref/doc/uid/TP40007875-CH6-SW2 UInt32 allowBluetoothInput = 1; status = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryEnableBluetoothInput, sizeof(allowBluetoothInput), &allowBluetoothInput); assert(status == noErr); Also, void AQRecorder::handleInputBufferCallback(void *aqData, ... ... if (inNumPackets > 0) { NSLog(@"Number of samples = %d", inBuffer->mAudioDataByteSize/2); // print out the number of sample status = AudioFileWritePackets(aqr->mAudioFile, FALSE, inBuffer->mAudioDataByteSize, inPacketDesc, aqr->mCurrentPacket, &inNumPackets, inBuffer->mAudioData); assert(status == noErr); aqr->mCurrentPacket += inNumPackets; } ... Notice the "NSLog(@"Number of samples = %d", inBuffer-mAudioDataByteSize/2); // print out the number of sample" statement above. I am using iPhone SDK 3.1.3. I got the following results The number of samples is around 44,100 on Simulator The number of samples is around 22,000 on iPhone The number of samples is around 4,000 on iPhone using Jawbone Bluetooth I am new on this. Why did itproduce different number of samples? Thanks in advance for your help.
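    A hedged explanation plus a small check: the per-callback counts track the hardware sample rate, which differs between the simulator (typically 44.1 kHz), the handset microphone, and a Bluetooth SCO headset (much lower); the audio session can report the rate it actually granted:

        Float64 hwSampleRate = 0;
        UInt32 propSize = sizeof(hwSampleRate);
        OSStatus result = AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                                                  &propSize, &hwSampleRate);
        if (result == noErr) {
            NSLog(@"Current hardware sample rate = %.0f Hz", hwSampleRate);
        }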

    Read the article

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably more than if it was on the internet. I have been using performance auditing tools such as Y Slow! and Google Chrome's profiling tool to try and highlight areas that are worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application suggests is as follows: Network Utilization Combine external CSS (Red warning) Combine external JavaScript (Red warning) Enable gzip compression (Red warning) Leverage browser caching (Red warning) Leverage proxy caching (Amber warning) Minimise cookie size (Amber warning) Parallelize downloads across hostnames (Amber warning) Serve static content from a cookieless domain (Amber warning) Web Page Performance Remove unused CSS rules (Amber warning) Use normal CSS property names instead of vendor-prefixed ones (Amber warning) Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this but all I keep coming up with is the standard internet facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article

  • PDF external streams in Mac OS X Preview

    - by olpa
    According to the specification, a part of a PDF document can reside in an external file. An example for an image: 2 0 obj << /Type /XObject /Subtype /Image /Width 117 /Height 117 /BitsPerComponent 8 /Length 0 /ColorSpace /DeviceRGB /FFilter /DCTDecode /F (pinguine.jpg) >> stream endstream endobj I found that this functionality does work in Adobe Acrobat 5.0 for Windows (sample PDF with the image), also I managed to view this file in Adobe Acrobat Reader 8.1.3 for Mac OS X after I found the setting "Allow external content". Unfortunately, it seems that non-Adobe tools ignore the external stream feature. I hope I'm wrong, therefore ask the question: How to enable external streams in Mac OS X? (I think that all the system Mac OS X tools use the same library, therefore say "Mac OS X" instead of "Preview".) Or maybe there could be a programming hook to emulate external streams? My task is: store a big set of images (total ˜300Mb) outside of a small PDF (˜1Mb). At some moment, I want to filter PDF through a quartz filter and get a PDF with the images embedded. Any suggestions are welcome.

    Read the article

  • SQL Server 2005 - Enabling both Named Pipes & TCP/IP protocols?

    - by Clinemi
    We have a SQL Server 2005 database, and currently all our users are connecting to the database via the TCP/IP protocol. The SQL Server Configuration Manager allows you to "enable" both Named Pipes, and TCP/IP connections at the same time. Is this a good idea? My question is not whether we should use named pipes instead of TCP/IP, but are there problems associated with enabling both? One of our client's IT guys, says that enabling database communication with both protocols will limit the bandwidth that either protocol can use - to like 50% of the total. I would think that the bandwidth that TCP/IP could use would be directly tied (inversely) to the amount of traffic that Named Pipes (or any of the other types of traffic) were occupying on the network at that moment. However, this IT person is indicating that the fact that we have enabled two protocols on the server, artificially limits the bandwidth that TCP/IP can use. Is this correct? I did Google searches but could not come up with an answer to this question. Any help would be appreciated.

    Read the article

  • Zend_Auth and database session SaveHandler

    - by takeshin
    I have created a Zend_Auth adapter implementing Zend_Auth_Adapter_Interface (similar to Pádraic's adapter) and created a simple ACL plugin. Everything works fine with the default session handler. So far, so good. As a next step I created a custom session SaveHandler to persist session data in the database. My implementation is very similar to the one from the parables-demo. Everything seems to work: session data is properly saved to the database and session objects are serialized, but authentication does not work when I enable this custom SaveHandler. I have debugged the authentication, and everything works until the next request, when the authentication data is lost. I suspected it had something to do with the fact that I call $adapter->write($object) instead of $adapter->write($string), but the same happens with strings. I bootstrap Zend_Application_Resource_Session in the first Bootstrap method, as early as possible. Does Zend_Auth need any extra configuration to persist data in the database? Why is the identity being lost?

    Read the article

  • Why do I get "mysql_query(): supplied argument is not a valid"

    - by Brian Ojeda
    Why do I get a "mysql_query(): supplied argument is not a valid" for the first... $r = mysql_query($q, $connection); In the following code... $bId = trim($_POST['bId']); $title = trim($_POST['title']); $story = trim($_POST['story']); $q = "SELECT * "; $q .= "FROM " . DB_NAME . ".`blog` "; $q .= "WHERE `blog`.`id` = {$bId}"; $r = mysql_query($q, $connection); //confirm_query($r); if (mysql_num_rows($r) == 1) { $q = "UPDATE " . DB_NAME . ".`blog` SET `title` = '{$title}', `story` = '{$story}' WHERE `id` = {$bId}"; $r = mysql_query($q, $connection); if (mysql_affected_rows() == 1) { //Successful $data['success'] = true; $date['errors'] = false; $date['message'] = "You are the Greatest!"; } else { //Fail $data['success'] = false; $data['error'] = true; $date['message'] = "You can't do it fool!"; } } I also get an "mysql_num_rows(): supplied argument is not a valid MySQL result resource" error too. Side notes: I am using 1&1 Hosting (worst hosting ever), custom .htaccess file with one line text to enable PHP 5.2 (only why with 1&1 Hosting).

    Read the article

  • Python CGI Premature end of script error depending on script parameters.

    - by nickengland
    I have a python script which should parse a file and produce some output to disk, as well as returning a webpage linking to the outputted files. When run with a file posted from the HTML form I get no HTML output back, just a 500 error page and the error_log contains the line: [Mon Apr 19 15:03:23 2010] [error] [client xxx.xxx.121.79] Premature end of script headers: uploadcml.py, referer: http://xxx.ch.cam.ac.uk:9000/ However, the files which the script should be saving are indeed saved to disk. If I run it without any arguments, the script returns the correct HTML indicating no file was parsed. All the information I have found on the web about Premature end of script headers implies it is due to either a missing header, or lack of permissions on the python script but neither can apply to me. The first lines of the script are: #!/home/nwe23/bin/bin/python import cgitb; cgitb.enable() import cgi import pybel,openbabel import random print "Content-Type: text/html" print so when run, I can see no way for it to fail to output the header, and it DOES output the header when run without a file to parse, but when given a file produces the error(but still parsed the file and saves the output to disk!). Does anyone know how this is happening and what can be done to fix it? I have tried adding wrongly-indented gibberish (such as foobar) at various points in the file, and this results in adding an indent error to the error_log wherever it is, even if its the very last line in the script. The Premature script headers error remains though. Does this mean the script is executing all the way through?
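    A hedged check rather than a definitive fix: emit and flush the header before any of the heavy work, so a later crash or stray output while parsing the uploaded file cannot leave Apache without complete headers; the pybel/openbabel imports and the parsing itself can then follow.

        #!/home/nwe23/bin/bin/python
        import cgitb; cgitb.enable()
        import sys

        print "Content-Type: text/html"
        print
        sys.stdout.flush()          # headers are on the wire before the upload is touched

        import cgi
        import pybel, openbabel     # heavy imports and file parsing only after the header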

    Read the article

  • Nginx, Apache, MySQL, Memcache on a server with 4 GB RAM: how to optimize so there is enough memory?

    - by TomSawyer
    I have one dedicated server with Nginx as a proxy for Apache, plus Memcache and MySQL, and 4 GB of RAM. Lately the number of visitors to my site has not increased, but the server always becomes overloaded during certain hours (9 AM - 3 PM). The RAM in use grows second by second until it is full; at that moment the server is overloaded, and I have to kill all the Apache and MySQL services and reboot to get free memory, after which it fills up again. It is a terrible circle. Here is the RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Here are the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here is my my.cnf; can someone help me optimize it? [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql skip-locking skip-networking skip-name-resolve # enable log-slow-queries log-slow-queries = /var/log/mysql-slow-queries.log long_query_time=3 max_connections=200 wait_timeout=64 connect_timeout = 10 interactive_timeout = 25 thread_stack = 512K max_allowed_packet=16M table_cache=1500 read_buffer_size=4M join_buffer_size=4M sort_buffer_size=4M read_rnd_buffer_size = 4M max_heap_table_size=256M tmp_table_size=256M thread_cache=256 query_cache_type=1 query_cache_limit=4M query_cache_size=16M thread_concurrency=8 myisam_sort_buffer_size=128M # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 [mysqldump] quick max_allowed_packet=16M [mysql] no-auto-rehash [isamchk] key_buffer=256M sort_buffer=256M read_buffer=64M write_buffer=64M [myisamchk] key_buffer=256M sort_buffer=256M read_buffer=64M write_buffer=64M [mysqlhotcopy] interactive-timeout [mysql.server] user=mysql basedir=/var/lib [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid
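    A rough worst-case sum under this my.cnf, given here as an estimate that assumes every connection actually allocates its per-thread buffers, which tends to happen exactly during the busy window:

        #   per connection: read(4M) + join(4M) + sort(4M) + read_rnd(4M) + thread_stack(0.5M) ~= 16.5 MB
        #   200 connections x 16.5 MB                                                          ~= 3300 MB
        #   + query_cache 16M + myisam_sort_buffer 128M + tmp/heap tables up to 256M each
        #   + memcached 512M + Apache workers + Nginx + MySQL base footprint  -> well past 4 GB
        # Lowering max_connections and shrinking the 4M per-thread buffers (e.g. to 256K-1M)
        # is the usual first step.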

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name": public class Foo : BusinessObjectBase { ... public virtual string Name { get; set; } } For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): The following code piece generates n+1 SQL statements, whereof the first only fetches the ids, and the remaining n fetch the Name for each record: ISession session = ...IQuery query = session.CreateQuery(queryString); ITransaction tx = session.BeginTransaction(); List<Foo> result = new List<Foo>(); foreach (Foo foo in query.Enumerable()) { result.Add(foo); } tx.Commit(); session.Close(); produces: NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_ NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81 NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470 NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473 Similarly, the following code leads to a LazyLoadingException after session is closed: ISession session = ... ITransaction tx = session.BeginTransaction(); Foo result = session.Load<Foo>(id); tx.Commit(); session.Close(); Console.WriteLine(result.Name); Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by doing a NHibernateUtil.Initialize(foo) but the even worse part are the n+1 sql statements which bring my application to its knees. This is how the mapping looks like: <class name="Foo" table="V1_FOO"> ... <property name="Name" column="NAME"/> </class> BTW: The abstract "BusinessObjectBase" base class encapsulates the ID property which serves as the internal identifier.
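    A hedged explanation of both symptoms rather than a confirmed diagnosis: IQuery.Enumerable() behaves like Hibernate's iterate(), i.e. it selects only the identifiers and loads each entity on demand, which matches the 1 + n statements in the log; and ISession.Load() returns an uninitialized proxy, which throws once the session is closed. List() (or Get()) hydrates the entities in one round trip:

        // sketch: 'sessionFactory' and 'id' are assumed to exist in the surrounding code
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            IList<Foo> result = session.CreateQuery("from Foo").List<Foo>();  // one SELECT, fully hydrated
            Foo single = session.Get<Foo>(id);                                // hits the DB now, no proxy
            tx.Commit();
        }   // both 'result' and 'single' stay usable after the session is closed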

    Read the article

  • ASP.NET C# Windows authentication IIS config

    - by user1566209
    I'm developing a web page where I need to know the user's Windows authentication values, more precisely the name. Other developments of this kind have been done here, but sadly their creators are long gone and I have no contact or documentation. I'm using Visual Studio 2008 and I'm accessing a web service on a remote server. The server is a Windows Server 2008 R2 Standard machine running IIS 7.5. Since I have the source code of the other developments, I copied and pasted their approach, and it worked fine when I called the web service on my own machine (localhost). The code is the following: //1st way WindowsPrincipal wp = new WindowsPrincipal(WindowsIdentity.GetCurrent()); string strUser = wp.Identity.Name;//ALWAYS GET NT AUTHORITY\NETWORK SERVICE //2nd way WindowsIdentity winId = WindowsIdentity.GetCurrent(); WindowsPrincipal winPrincipal = new WindowsPrincipal(winId); string user = winPrincipal.Identity.Name;//ALWAYS GET NT AUTHORITY\NETWORK SERVICE //3rd way IIdentity WinId = HttpContext.Current.User.Identity; WindowsIdentity wi = (WindowsIdentity)WinId; string userstr = wi.Name; //ALWAYS GET string empty btn_select.Text = userstr; btn_cancelar.Text = strUser; btn_gravar.Text = user; As you can see, I have three ways of getting the name, yet none of them shows my user's name. My web.config contains: <authentication mode="Windows"/> <identity impersonate="true" /> In IIS Manager I have tried many combinations of enabling and disabling Anonymous Authentication, ASP.NET Impersonation, Basic Authentication, Forms Authentication and Windows Authentication. Can someone please help me? NOTE: the values I get from each attempt are in the code comments.
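    A sketch of the usual setup, not a verified fix for this server: WindowsIdentity.GetCurrent() reports the process or impersonated identity (hence NT AUTHORITY\NETWORK SERVICE), while HttpContext.Current.User.Identity only carries a name once IIS actually authenticates the caller. That means enabling Windows Authentication and disabling Anonymous Authentication for the site in IIS 7.5, and refusing anonymous users in web.config:

        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
          <authorization>
            <deny users="?" />   <!-- reject unauthenticated requests so Identity.Name is never empty -->
          </authorization>
        </system.web>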

    Read the article

  • Change an Image while using JCrop

    - by Bathan
    Hi guys! I'm working on a new feature for my site and I got stuck really badly. I'm using JCrop to crop an image on my website. The new feature I've been asked to implement is to let the user change the colours of the image being cropped; I now have three images: colour, grayscale and sepia. I can change the source of the image tag with JavaScript, so the image changes without a reload, but I cannot do this once JCrop has been enabled, because it replaces the original image with a new one. I thought I could disable JCrop, replace the image and then re-enable it, but I could not manage to do that. The example I found where JCrop gets destroyed (example 5 in the demo zip) uses an object: jcrop_api = $.Jcrop('#cropbox'); but I'm enabling JCrop in a different manner, more like example 3: jQuery('#cropbox').Jcrop({ onChange: showPreview, onSelect: showPreview, aspectRatio: 1 }); How can I destroy JCrop so I can replace the image? Is there another way to do this? Please help! I could easily reload the page each time the user changes the colour of the image, but we all know that's not cool. Any comments will be appreciated. Thanks.
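    A minimal sketch of one way to get at the API even with the example-3 style of initialization, assuming a reasonably recent Jcrop 0.9.x: the second argument to .Jcrop() is a callback in which this is the API instance, and that instance exposes setImage() and destroy().

        var jcrop_api;

        jQuery('#cropbox').Jcrop({
            onChange: showPreview,
            onSelect: showPreview,
            aspectRatio: 1
        }, function () {
            jcrop_api = this;              // 'this' is the Jcrop API object inside this callback
        });

        function switchImage(src) {
            jcrop_api.setImage(src);       // swap colour/grayscale/sepia without reloading the page
        }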

    Read the article
