Search Results

Search found 27870 results on 1115 pages for 'standard output'.


  • GPLv3 and consulting

    - by Mjgp2
    If you are doing a one-off piece of work as a consultant, for which the client receives only binaries, can you use GPLv3 software? You are working on behalf of the client and not distributing it. From a standard GPL FAQ:

    Q: I am an OXID partner/reseller/consultant and I occasionally do one-time OXID eShop Community Edition customization work for clients which is derived from OXID modules. These customizations are used only for the client. Do I need to make these changes publicly available?

    A: No, unless your client redistributes the code, in which case the client must make it available to its licensees.

    Is this implying that it is OK?

    Read the article

  • Flash under DHTML menu

    - by David
    I've been struggling with this problem for a few hours and it's driving me crazy. I want my drop-down menu to appear over a Flash area, and it works, but only in Firefox. Unfortunately, IE and Opera show my menu under the Flash. The DHTML menu system is as simple as possible and was written from scratch by me. I've tried everything, and it still doesn't work like it should. I tried to embed the Flash element with the jquery.flashEmbed script and with standard markup using the transparent wmode param, but it never works. Please help me, I'm losing my head. Here is the XHTML: http://www.project.yamandi.com/toton/ Regards, David
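
    For reference, the usual recipe combines wmode on both the object and embed elements with an explicit stacking order on the menu; a minimal sketch (the #menu selector and movie.swf are illustrative):

      <object type="application/x-shockwave-flash" data="movie.swf" width="400" height="300">
        <param name="movie" value="movie.swf" />
        <param name="wmode" value="transparent" />
      </object>

      /* CSS: the menu must be positioned and above the Flash area */
      #menu { position: absolute; z-index: 1000; }

    IE in particular renders windowed Flash above everything regardless of z-index, so the wmode param must actually reach the markup the browser sees.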

    Read the article

  • SharePoint and Cross-Site Lookup

    - by Mina Samy
    Hi all. I have this scenario: I want to build two SharePoint 2007 sites, one for customer info and the other for products and customer orders. The problem is that in the second site I need to reference the customer info from the first site, but unfortunately SharePoint does not provide an out-of-the-box cross-site lookup. I did some searching, found custom cross-site fields, and used one, but when I upgraded the site to SharePoint 2010 this custom field was not compatible and the upgrade wizard said it could not be upgraded. So what is the solution for this? Is it to merge the two sites so that I can use the standard lookup feature, or is there any workaround? If anybody has faced such a scenario, please share the solution with me. Thanks

    Read the article

  • iPhone – Best method to import/draw UI graphic elements? CGContextDrawPDFPage?

    - by Ross
    Hello. What is the best way to use custom UI graphics on the iPhone? I've come across CGContextDrawPDFPage and Panic's Shrinkit. Should I store my vector UI graphics as PDFs and draw them using CGContextDrawPDFPage? I previously asked how Apple stores their UI graphics, and the answer was crushed PNGs. These are the options as I see them, but I would really like to know what techniques other people use. This question is for vector graphics only. I'm looking for what is standard / most effective / most efficient:

    1. PNG (bitmapped image)
    2. Custom UIView drawing code (generated from Opacity)
    3. PDF (I've not used this method; is it with CGContextDrawPDFPage?)

    Many thanks, Ross
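
    For option 3, a minimal Core Graphics sketch of the PDF route (assuming a bundled one-page ui.pdf; pdfURL and bounds are illustrative, and the drawing happens inside a view's drawRect:):

      // load the document and grab page 1 (CGPDFDocumentGetPage is 1-based)
      CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL(pdfURL);
      CGPDFPageRef page = CGPDFDocumentGetPage(doc, 1);

      // PDF space is flipped relative to UIKit, so flip the context first
      CGContextRef ctx = UIGraphicsGetCurrentContext();
      CGContextSaveGState(ctx);
      CGContextTranslateCTM(ctx, 0, bounds.size.height);
      CGContextScaleCTM(ctx, 1.0, -1.0);
      CGContextDrawPDFPage(ctx, page);
      CGContextRestoreGState(ctx);
      CGPDFDocumentRelease(doc);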

    Read the article

  • Why does a WPF toggle button pulse once unchecked?

    - by randyc
    Within a desktop app I have a toggle button. For the Checked event I am setting a value and then executing a method, FilterView() (omitting code). The Unchecked state is just the opposite: it resets a variable and executes the method again. The question I have: I noticed that when I uncheck the toggle button, the button continues to pulse or flash (going from blue to chrome) as if it still has focus. The button stays like this until another button is clicked. Is there a way to remove this focus, so that when the button is unchecked it goes back to an unchecked state without the flashing / pulsing color? As you can see from the above, this is a standard toggle button, with no styles or customization. I tested this on a regular button and found the same thing occurred: when clicked, the button continues to pulse / flash until another button is clicked. How do you work around this or prevent this effect from happening? Thank you
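
    One workaround sketch: the pulsing is the Aero focus animation, so dropping keyboard focus in the handler stops it. This assumes .NET 4's Keyboard.ClearFocus is available (on earlier versions, move focus to another element explicitly); the reset line is illustrative:

      private void Toggle_Unchecked(object sender, RoutedEventArgs e)
      {
          currentFilter = null;   // illustrative reset of the variable
          FilterView();
          // drop keyboard focus so the Aero pulse animation stops
          Keyboard.ClearFocus();
      }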

    Read the article

  • Solved: puppet master REST API returns 403 when running under Passenger; works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file, and certificate requests I get a 403 response.

      ### Authenticated paths - these apply only when the client
      ### has a valid certificate and is thus authenticated

      # allow nodes to retrieve their own catalog
      path ~ ^/catalog/([^/]+)$
      method find
      allow $1

      # allow nodes to retrieve their own node definition
      path ~ ^/node/([^/]+)$
      method find
      allow $1

      # allow all nodes to access the certificates services
      path ~ ^/certificate_revocation_list/ca
      method find
      allow *

      # allow all nodes to store their reports
      path /report
      method save
      allow *

      # unconditionally allow access to all file services
      # which means in practice that fileserver.conf will
      # still be used
      path /file
      allow *

      ### Unauthenticated ACL, for clients for which the current master doesn't
      ### have a valid certificate; we allow authenticated users, too, because
      ### there isn't a great harm in letting that request through.

      # allow access to the master CA
      path /certificate/ca
      auth any
      method find
      allow *

      path /certificate/
      auth any
      method find
      allow *

      path /certificate_request
      auth any
      method find, save
      allow *

      path /facts
      auth any
      method find, search
      allow *

      # this one is not strictly necessary, but it has the merit
      # of showing the default policy, which is deny everything else
      path /
      auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

      [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
      [sudo] password for amisr1:
      Starting Puppet client version 3.0.1
      Warning: Unable to fetch my node definition, but the agent run will continue:
      Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
      Info: Retrieving plugin
      Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
      Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
      Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
      Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
      Using cached catalog
      Error: Could not retrieve catalog; skipping run
      Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what the Puppet site says, it is better to regulate access in auth.conf for reaching the file server, and then let the file server serve all):

      [files]
      path /apps/puppet/files
      allow *

      [private]
      path /apps/puppet/private/%H
      allow *

      [modules]
      allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

      nginx version: nginx/1.3.9
      built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
      TLS SNI support enabled
      configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard Nginx puppet master conf:

      server {
          ssl on;
          listen 8140 ssl;
          server_name _;
          passenger_enabled on;
          passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
          passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
          passenger_min_instances 5;
          access_log logs/puppet_access.log;
          error_log logs/puppet_error.log;
          root /apps/nginx/html/rack/public;
          ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
          ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
          ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
          ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
          ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
          ssl_prefer_server_ciphers on;
          ssl_verify_client optional;
          ssl_verify_depth 1;
          ssl_session_cache shared:SSL:128m;
          ssl_session_timeout 5m;
      }

    Puppet is picking up the correct settings from the files mentioned, because config print points to /etc/puppet:

      [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
      async_storeconfigs = false
      authconfig = /etc/puppet/namespaceauth.conf
      autosign = /etc/puppet/autosign.conf
      catalog_cache_terminus = store_configs
      confdir = /etc/puppet
      config = /etc/puppet/puppet.conf
      config_file_name = puppet.conf
      config_version = ""
      configprint = all
      configtimeout = 120
      dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
      deviceconfig = /etc/puppet/device.conf
      fileserverconfig = /etc/puppet/fileserver.conf
      genconfig = false
      hiera_config = /etc/puppet/hiera.yaml
      localconfig = /var/lib/puppet/state/localconfig
      name = config
      rest_authconfig = /etc/puppet/auth.conf
      storeconfigs = true
      storeconfigs_backend = puppetdb
      tagmap = /etc/puppet/tagmail.conf
      thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140, and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Update

    I added verbose logging to the puppet master and restarted Nginx; here's the additional info I see in the logs:

      Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
      Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
      Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
      Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
      10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

      [amisr1@blramisr195602 ~]$ sudo facter fqdn
      blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, and regenerated the certificates and signed them on the master using the option --allow-dns-alt-names:

      [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
      Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
      [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
      Signed certificate request for blramisr195602.XXXXXX.com
      Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. I'm not sure why the logs show access rules being compared by IP and not by hostname. Is there any Nginx configuration to change this behavior?

    Read the article

  • Cfsearch in combination of documents and indexed query data?

    - by Bart B
    Hi! I have an application which stores all kinds of data about people. The current cfsearch functionality (in Verity) includes searching documents that are attached to these people. If I have two documents attached to one person, one with, say, ABC in it and the other with XYZ in it, my ideal search result for "ABC AND XYZ" would return the one person. But as both 'words' are indexed in different documents, the standard behaviour is that the cfsearch returns no result, because the combination doesn't exist in either of the two documents. Is there any way to combine indexed documents and/or query data so that the search is executed over the combination of relevant docs and data? In my application that would mean I could index all documents and data regarding people, and have an intelligent 'global' search to find the right person. Any pointers and help very much appreciated! (If Solr offers new possibilities in comparison to Verity, switching is no problem!) Thanks! Bart

    Read the article

  • Best practice to display POI in iPhone's MapKit?

    - by iamj4de
    Assuming I have a database of POIs with their respective coordinates (longitude & latitude), what would be the "standard" way to display the POIs as annotations around the user's current location? To elaborate: given a zoom level, I guess I have to search the database for all POIs whose distance to the current location is less than a certain threshold, then create annotations for them. Or is there a smarter way? If the user zooms in/out or moves the map, will I need to redo the whole thing again? It seems that MapKit has a mechanism to cache/reuse annotations. Should I create a lot of them right away and let MapKit decide what to render when the visible region changes? I guess this would make the transition smoother, but also consume more memory. What is your experience with this? Thanks.
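
    The usual first step is a bounding-box query derived from the map's visible region rather than a per-row distance computation; a sketch, assuming a poi(id, lat, lon) table and bounds computed as center ± span/2:

      SELECT id, lat, lon
      FROM poi
      WHERE lat BETWEEN :minLat AND :maxLat
        AND lon BETWEEN :minLon AND :maxLon;

    Re-run the query when the visible region changes, and add/remove only the annotations that entered or left the box; exact distance filtering, if needed, can then run on the small result set.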

    Read the article

  • CCSprite following a sequence of CGPoints

    - by pgb
    I'm developing a line-drawing game, similar to Flight Control, Harbor Master, and others in the App Store, using Cocos2D. For this game, I need a CCSprite to follow a line that the user has drawn. I'm storing a sequence of CGPoint structs in an NSArray, based on the points I get in the touchesBegan and touchesMoved messages. I now have the problem of how to make my sprite follow them. I have a tick method that is called at the frame rate. In that tick method, based on the speed and current position of the sprite, I need to calculate its next position. Is there any standard way to achieve this? My current approach is to calculate the line between the last "reference point" and the next one, and calculate the next point on that line. The problem I have is when the sprite "turns" (moves from one segment of the line to another). Any hint will be greatly appreciated.
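
    A common fix for the "turns" problem is to spend the tick's travel distance segment by segment, carrying any leftover across corners. A sketch in C (pos, points, pointCount, and segment are illustrative state):

      // advance the sprite by speed*dt along the polyline, crossing corners cleanly
      float remaining = speed * dt;
      while (remaining > 0 && segment < pointCount - 1) {
          CGPoint target = points[segment + 1];
          float dx = target.x - pos.x, dy = target.y - pos.y;
          float segLen = sqrtf(dx * dx + dy * dy);
          if (segLen <= remaining) {
              pos = target;          // reach the corner...
              remaining -= segLen;   // ...and spend the leftover on the next segment
              segment++;
          } else {
              pos.x += dx / segLen * remaining;
              pos.y += dy / segLen * remaining;
              remaining = 0;
          }
      }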

    Read the article

  • WPF TabControl styling

    - by BrettRobi
    I've got a UI with a fairly standard look and feel. It has a column of icons on the left side which, when clicked, open a different user control on the right side. Currently I'm using separate controls for the selection icons and the user-control containment. I'm having strange focus issues that I am tired of trying to mitigate, and am wondering if I could style a TabControl to look like my UI (under the assumption that a TabControl would not have focus issues when navigating tabs). Here is a screenshot of the basic UI. The styling question is mostly about how to get the TabControl's page selection to look like my column of icons. Anyone want to throw their hat in the ring as to how I might accomplish this with a TabControl? My XAML is pretty weak at this point.
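
    A sketch of one way to get there (colors and padding are placeholders): dock the tab strip on the left with TabStripPlacement, put each icon in the TabItem header, and retemplate TabItem so only the icon shows:

      <TabControl TabStripPlacement="Left" BorderThickness="0">
          <TabControl.ItemContainerStyle>
              <Style TargetType="TabItem">
                  <Setter Property="Template">
                      <Setter.Value>
                          <ControlTemplate TargetType="TabItem">
                              <!-- the header content is the icon itself -->
                              <Border x:Name="Bd" Background="Transparent" Padding="8">
                                  <ContentPresenter ContentSource="Header" />
                              </Border>
                              <ControlTemplate.Triggers>
                                  <Trigger Property="IsSelected" Value="True">
                                      <Setter TargetName="Bd" Property="Background" Value="#22000000" />
                                  </Trigger>
                              </ControlTemplate.Triggers>
                          </ControlTemplate>
                      </Setter.Value>
                  </Setter>
              </Style>
          </TabControl.ItemContainerStyle>
          <!-- one TabItem per icon; Content hosts the corresponding user control -->
      </TabControl>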

    Read the article

  • iPhone: How to detect if an EKEvent instance can be modified?

    - by Tom van Zummeren
    While working with EventKit on iPhone I noticed that some events exist which cannot be modified. Examples I have encountered so far are birthdays and events synced via CalDAV. When you view the event's details in the standard built-in Calendar app on iPhone, the "Edit" button in the top-right corner is not visible in these cases, where it would be visible when viewing "normal" events. I've searched everywhere and read all the documentation there is, but I simply can't find anything that tells me how to detect this behavior! I can only detect it afterwards:

    1. edit an event's title
    2. save it to the event store
    3. check the event's title; if it has not changed, it is not editable!

    I am looking for a way to detect the non-editable behavior of an event beforehand. I know this is possible because I've seen other calendar apps implement it correctly.
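
    One property worth trying (an assumption worth verifying against your iOS version, not a confirmed fix): EKCalendar exposes allowsContentModifications, and read-only sources such as subscribed calendars report NO for it:

      // sketch: treat events whose calendar is read-only as non-editable
      BOOL editable = event.calendar.allowsContentModifications;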

    Read the article

  • What's a good minimal server-side JavaScript framework?

    - by Nick Retallack
    So I was writing a web app with web.py that uses plenty of client-side JavaScript, and my database is on CouchDB so the queries are in JavaScript too, and eventually I just got to thinking: why not skip the Python and go all JavaScript? Besides, some functions need to run once on the client and again on the server to make sure you're not spoofing, so why translate between JavaScript and Python? So I'm looking for a simple, lightweight JavaScript web framework. All I really need is the URL routing, request and response handling (standard WSGI-style?), and a way to hook into a big HTTP server like Nginx. What do you guys recommend?
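
    For a sense of scale, the routing-plus-request/response layer being asked for is only a few lines on bare Node.js; a sketch (route table and port are illustrative):

      var http = require('http');

      // route table keyed by "METHOD /path"
      var routes = {
        'GET /hello': function (req, res) { res.end('hello world'); }
      };

      http.createServer(function (req, res) {
        var handler = routes[req.method + ' ' + req.url];
        if (handler) { handler(req, res); }
        else { res.statusCode = 404; res.end('not found'); }
      }).listen(8080);

    Anything beyond that (URL pattern matching, middleware) is what the frameworks add; hooking into Nginx is then just a reverse proxy to the Node port.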

    Read the article

  • How can I use a custom ValidationAttribute to ensure two properties match?

    - by Brandon Linton
    We're using xVal and the standard DataAnnotationsValidationRunner described here to collect validation errors from our domain objects and view models in ASP.NET MVC. I'd like that validation runner to identify when two properties don't match, through the use of custom DataAnnotations. Right now I'm forced to do it outside the runner, this way:

      if (!(model.FieldOne == model.FieldTwo))
          errors.Add(new ErrorInfo("FieldTwo", "FieldOne must match FieldTwo", model.FieldTwo));

    My question is: can this be done using property-level validation attributes, or am I forced into using class-level attributes (in which case I'd have to modify the runner, and my follow-up question would be how best to retrieve them in that case)? Thanks!
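
    For the class-level route, a sketch of what such an attribute could look like (names are illustrative; the runner would still need to be extended to collect class-level attributes, e.g. via type.GetCustomAttributes):

      [AttributeUsage(AttributeTargets.Class)]
      public class PropertiesMustMatchAttribute : ValidationAttribute
      {
          private readonly string _first;
          private readonly string _second;

          public PropertiesMustMatchAttribute(string first, string second)
          {
              _first = first;
              _second = second;
          }

          // applied at class level, 'value' is the model instance itself
          public override bool IsValid(object value)
          {
              object a = value.GetType().GetProperty(_first).GetValue(value, null);
              object b = value.GetType().GetProperty(_second).GetValue(value, null);
              return Equals(a, b);
          }
      }

      // usage: [PropertiesMustMatch("FieldOne", "FieldTwo")] on the view model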

    Read the article

  • Advantage Database Server: slow stored procedure performance

    - by ie
    I have a question about the performance of stored procedures in ADS. I created a simple database with the following structure:

      CREATE TABLE MainTable
      (
          Id INTEGER PRIMARY KEY,
          Name VARCHAR(50),
          Value INTEGER
      );

      CREATE UNIQUE INDEX MainTableName_UIX ON MainTable ( Name );

      CREATE TABLE SubTable
      (
          Id INTEGER PRIMARY KEY,
          MainId INTEGER,
          Name VARCHAR(50),
          Value INTEGER
      );

      CREATE INDEX SubTableMainId_UIX ON SubTable ( MainId );
      CREATE UNIQUE INDEX SubTableName_UIX ON SubTable ( Name );

      CREATE PROCEDURE CreateItems
      (
          MainName VARCHAR ( 20 ),
          SubName VARCHAR ( 20 ),
          MainValue INTEGER,
          SubValue INTEGER,
          MainId INTEGER OUTPUT,
          SubId INTEGER OUTPUT
      )
      BEGIN
          DECLARE @MainName VARCHAR ( 20 );
          DECLARE @SubName VARCHAR ( 20 );
          DECLARE @MainValue INTEGER;
          DECLARE @SubValue INTEGER;
          DECLARE @MainId INTEGER;
          DECLARE @SubId INTEGER;
          @MainName = (SELECT MainName FROM __input);
          @SubName = (SELECT SubName FROM __input);
          @MainValue = (SELECT MainValue FROM __input);
          @SubValue = (SELECT SubValue FROM __input);
          @MainId = (SELECT MAX(Id)+1 FROM MainTable);
          @SubId = (SELECT MAX(Id)+1 FROM SubTable );
          INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, @MainName, @MainValue);
          INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, @SubName, @MainId, @SubValue);
          INSERT INTO __output SELECT @MainId, @SubId FROM system.iota;
      END;

      CREATE PROCEDURE UpdateItems
      (
          MainName VARCHAR ( 20 ),
          MainValue INTEGER,
          SubValue INTEGER
      )
      BEGIN
          DECLARE @MainName VARCHAR ( 20 );
          DECLARE @MainValue INTEGER;
          DECLARE @SubValue INTEGER;
          DECLARE @MainId INTEGER;
          @MainName = (SELECT MainName FROM __input);
          @MainValue = (SELECT MainValue FROM __input);
          @SubValue = (SELECT SubValue FROM __input);
          @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName);
          UPDATE MainTable SET Value = @MainValue WHERE Id = @MainId;
          UPDATE SubTable SET Value = @SubValue WHERE MainId = @MainId;
      END;

      CREATE PROCEDURE SelectItems
      (
          MainName VARCHAR ( 20 ),
          CalculatedValue INTEGER OUTPUT
      )
      BEGIN
          DECLARE @MainName VARCHAR ( 20 );
          @MainName = (SELECT MainName FROM __input);
          INSERT INTO __output
          SELECT m.Value * s.Value
          FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId
          WHERE m.Name = @MainName;
      END;

      CREATE PROCEDURE DeleteItems
      (
          MainName VARCHAR ( 20 )
      )
      BEGIN
          DECLARE @MainName VARCHAR ( 20 );
          DECLARE @MainId INTEGER;
          @MainName = (SELECT MainName FROM __input);
          @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName);
          DELETE FROM SubTable WHERE MainId = @MainId;
          DELETE FROM MainTable WHERE Id = @MainId;
      END;

    The problem: even such light stored procedures run very slowly (about 50-150 ms) relative to plain queries (0-5 ms). To test the performance, I created a simple test (in F#, using the ADS ADO.NET provider):

      open System;
      open System.Data;
      open System.Diagnostics;
      open Advantage.Data.Provider;

      let mainName = "main name #";
      let subName = "sub name #";

      // INSERT
      let cmdTextScriptInsert = "
          DECLARE @MainId INTEGER;
          DECLARE @SubId INTEGER;
          @MainId = (SELECT MAX(Id)+1 FROM MainTable);
          @SubId = (SELECT MAX(Id)+1 FROM SubTable );
          INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, :MainName, :MainValue);
          INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, :SubName, @MainId, :SubValue);
          SELECT @MainId, @SubId FROM system.iota;";
      let cmdTextProcedureInsert = "CreateItems";

      // UPDATE
      let cmdTextScriptUpdate = "
          DECLARE @MainId INTEGER;
          @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName);
          UPDATE MainTable SET Value = :MainValue WHERE Id = @MainId;
          UPDATE SubTable SET Value = :SubValue WHERE MainId = @MainId;";
      let cmdTextProcedureUpdate = "UpdateItems";

      // SELECT
      let cmdTextScriptSelect = "
          SELECT m.Value * s.Value
          FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId
          WHERE m.Name = :MainName;";
      let cmdTextProcedureSelect = "SelectItems";

      // DELETE
      let cmdTextScriptDelete = "
          DECLARE @MainId INTEGER;
          @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName);
          DELETE FROM SubTable WHERE MainId = @MainId;
          DELETE FROM MainTable WHERE Id = @MainId;";
      let cmdTextProcedureDelete = "DeleteItems";

      let cnnStr = @"data source=D:\DB\test.add; ServerType=local; user id=adssys; password=***;";
      let cnn = new AdsConnection(cnnStr);
      try
          cnn.Open();
          let cmd = cnn.CreateCommand();

          let parametrize ix prms =
              cmd.Parameters.Clear();
              let addParam = function
                  | "MainName"  -> cmd.Parameters.Add(":MainName" , mainName + ix.ToString()) |> ignore;
                  | "SubName"   -> cmd.Parameters.Add(":SubName"  , subName + ix.ToString() ) |> ignore;
                  | "MainValue" -> cmd.Parameters.Add(":MainValue", ix * 3 ) |> ignore;
                  | "SubValue"  -> cmd.Parameters.Add(":SubValue" , ix * 7 ) |> ignore;
                  | _ -> ()
              prms |> List.iter addParam;

          let runTest testData =
              let (cmdType, cmdName, cmdText, cmdParams) = testData;
              let toPrefix cmdType cmdName =
                  let prefix =
                      match cmdType with
                      | CommandType.StoredProcedure -> "Procedure-"
                      | CommandType.Text            -> "Script -"
                      | _                           -> "Unknown -"
                  in prefix + cmdName;
              let stopWatch = new Stopwatch();
              let runStep ix prms =
                  parametrize ix prms;
                  stopWatch.Start();
                  cmd.ExecuteNonQuery() |> ignore;
                  stopWatch.Stop();
              cmd.CommandText <- cmdText;
              cmd.CommandType <- cmdType;
              let startId = 1500;
              let count = 10;
              for id in startId .. startId+count do runStep id cmdParams;
              let elapsed = stopWatch.Elapsed;
              Console.WriteLine("Test '{0}' - total: {1}; per call: {2}ms",
                  toPrefix cmdType cmdName, elapsed,
                  Convert.ToInt32(elapsed.TotalMilliseconds)/count);

          let lst =
              [ (CommandType.Text, "Insert", cmdTextScriptInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]);
                (CommandType.Text, "Update", cmdTextScriptUpdate, ["MainName"; "MainValue"; "SubValue"]);
                (CommandType.Text, "Select", cmdTextScriptSelect, ["MainName"]);
                (CommandType.Text, "Delete", cmdTextScriptDelete, ["MainName"]);
                (CommandType.StoredProcedure, "Insert", cmdTextProcedureInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]);
                (CommandType.StoredProcedure, "Update", cmdTextProcedureUpdate, ["MainName"; "MainValue"; "SubValue"]);
                (CommandType.StoredProcedure, "Select", cmdTextProcedureSelect, ["MainName"]);
                (CommandType.StoredProcedure, "Delete", cmdTextProcedureDelete, ["MainName"]) ];
          lst |> List.iter runTest;
      finally
          cnn.Close();

    And I'm getting the following results:

      Test 'Script -Insert' - total: 00:00:00.0292841; per call: 2ms
      Test 'Script -Update' - total: 00:00:00.0056296; per call: 0ms
      Test 'Script -Select' - total: 00:00:00.0051738; per call: 0ms
      Test 'Script -Delete' - total: 00:00:00.0059258; per call: 0ms
      Test 'Procedure-Insert' - total: 00:00:01.2567146; per call: 125ms
      Test 'Procedure-Update' - total: 00:00:00.7442440; per call: 74ms
      Test 'Procedure-Select' - total: 00:00:00.5120446; per call: 51ms
      Test 'Procedure-Delete' - total: 00:00:01.0619165; per call: 106ms

    The situation with the remote server is much better, but there is still a great gap between plain queries and stored procedures:

      Test 'Script -Insert' - total: 00:00:00.0709299; per call: 7ms
      Test 'Script -Update' - total: 00:00:00.0161777; per call: 1ms
      Test 'Script -Select' - total: 00:00:00.0258113; per call: 2ms
      Test 'Script -Delete' - total: 00:00:00.0166242; per call: 1ms
      Test 'Procedure-Insert' - total: 00:00:00.5116138; per call: 51ms
      Test 'Procedure-Update' - total: 00:00:00.3802251; per call: 38ms
      Test 'Procedure-Select' - total: 00:00:00.1241245; per call: 12ms
      Test 'Procedure-Delete' - total: 00:00:00.4336334; per call: 43ms

    Is there any chance of improving the SP performance? Please advise.

    ADO.NET driver version - 9.10.2.9
    Server version - 9.10.0.9 (ANSI - GERMAN, OEM - GERMAN)

    Thanks!

    Read the article

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff; I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people.

    Our internal network has three "segments" and I'd like to share bandwidth more or less equally between them (at least in the beginning). Further, I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either.

    The question is, which makes more sense and also guarantees better realtime throughput:

    1. Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), and each of these classes having three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates).

    2. Having one class per priority level on top, each having a different rate (again, priority won't matter), and each having 3 sub-classes, one per segment, whereas all 3 in the realtime class have the highest prio, the lowest prio in the bulk class, and so on.

    I'll try to make this clearer with the following ASCII art image:

    Case 1:

      root --+--> Segment A
             |       +--> High Prio
             |       +--> Normal Prio
             |       +--> Low Prio
             |
             +--> Segment B
             |       +--> High Prio
             |       +--> Normal Prio
             |       +--> Low Prio
             |
             +--> Segment C
                     +--> High Prio
                     +--> Normal Prio
                     +--> Low Prio

    Case 2:

      root --+--> High Prio
             |       +--> Segment A
             |       +--> Segment B
             |       +--> Segment C
             |
             +--> Normal Prio
             |       +--> Segment A
             |       +--> Segment B
             |       +--> Segment C
             |
             +--> Low Prio
                     +--> Segment A
                     +--> Segment B
                     +--> Segment C

    Case 1 seems like the way most people would do it, but unless I misread the HTB implementation details, case 2 may offer better prioritizing. The HTB manual says that if a class has hit its rate, it may borrow from its parent, and when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree level are always preferred to those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send; since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, this class is now served first, till it also hits its rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of traffic to spare, as Segment C is not using any of its guaranteed traffic), but again, it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again, and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again, High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth to spare, but being on a higher level, it has to wait for Low Prio-Segment B again to also hit its rate. Once both have hit their rates and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which again has plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in case 2 it seems like the realtime traffic has to wait less than in case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in case 1 that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class can no longer saturate the link on its own; even if all other priority classes are empty, the TBF will prevent that from happening. And this is also sub-optimal, since why wouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer, it would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
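
    For concreteness, a minimal sketch of case 2 in tc syntax (device, rates, and ceilings are illustrative placeholders; only the High Prio leaves are shown):

      # root HTB; unclassified traffic falls into leaf 1:21
      tc qdisc add dev eth0 root handle 1: htb default 21

      # one inner class per priority level under a common parent
      tc class add dev eth0 parent 1:  classid 1:1  htb rate 9mbit ceil 9mbit
      tc class add dev eth0 parent 1:1 classid 1:10 htb rate 3mbit ceil 9mbit prio 0
      tc class add dev eth0 parent 1:1 classid 1:20 htb rate 3mbit ceil 9mbit prio 1
      tc class add dev eth0 parent 1:1 classid 1:30 htb rate 3mbit ceil 9mbit prio 2

      # leaves: one class per segment under each priority level
      tc class add dev eth0 parent 1:10 classid 1:11 htb rate 1mbit ceil 9mbit prio 0
      tc class add dev eth0 parent 1:10 classid 1:12 htb rate 1mbit ceil 9mbit prio 0
      tc class add dev eth0 parent 1:10 classid 1:13 htb rate 1mbit ceil 9mbit prio 0
      # ...repeat the same pattern for 1:20 and 1:30, then attach tc filters
      # (e.g. u32 or fwmark matches) to steer traffic into the leaves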

    Read the article

  • iPhone Settings app and keyboard control?

    - by randallmeadows
    So I'm putting my app's preference settings into the Settings app. One of the settings is an editable text field (PSTextFieldSpecifier). When touched, the keyboard dutifully appears and I can make the edits, but when I press Return... nothing. Well, the editing is completed, but the keyboard remains; I see no way to make the keyboard go away. I also notice this same behavior in other Settings panes, including those from Apple. Do I assume correctly that this is just standard behavior, and that I need to accept the fact that my Settings table has now been reduced to half size, and just deal? Furthermore, I gather there is no approved way to have a "rich" child pane display, such as that seen in Settings > General > About > Legal? Or a way to do what appears to be a -presentModalViewController, à la Settings > General > Passcode Lock?

    Read the article

  • django-admin formfield_for_*: change default value depending on the instance

    - by Nick Ma.
    Hi, I'm trying to change the default value of a foreign-key formfield to a value from another model depending on the logged-in user, but I'm racking my brain on it... "Changing ForeignKey's defaults in admin site" would be an option for changing the empty_label, but I need the default value.

    Now I tried the following without errors, but it didn't have the desired effect:

      class EmployeeAdmin(admin.ModelAdmin):
          ...
          def formfield_for_foreignkey(self, db_field, request=None, **kwargs):
              formfields = super(EmployeeAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs)
              if request.user.is_superuser:
                  return formfields
              if db_field.name == "company":
                  # This is the RELEVANT LINE
                  kwargs["initial"] = request.user.default_company
              return db_field.formfield(**kwargs)

      admin.site.register(Employee, EmployeeAdmin)

      ##################################################################
      # REMAINING setup, if someone would like to know it, but I think
      # it is irrelevant concerning the problem
      ##################################################################

      from django.contrib.auth.models import User, UserManager

      class CompanyUser(User):
          ...
          objects = UserManager()
          company = models.ManyToManyField(Company)
          default_company = models.ForeignKey(Company, related_name='default_company')

      # I registered the CompanyUser instead of the standard User;
      # that's all up and working.

      class Employee(models.Model):
          company = models.ForeignKey(Company)
          ...

    Hint: kwargs["default"] ... doesn't exist.

    Thanks in advance, Nick
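
    A sketch of the more common ordering, in case the early super() call is what drops the initial value (an untested guess, not a confirmed fix): set kwargs first, then delegate once:

      class EmployeeAdmin(admin.ModelAdmin):
          def formfield_for_foreignkey(self, db_field, request=None, **kwargs):
              # set kwargs *before* handing off to super, for non-superusers
              if db_field.name == "company" and not request.user.is_superuser:
                  kwargs["initial"] = request.user.default_company
              return super(EmployeeAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs)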

    Read the article

  • Receiving "MERGE" 200 OK error when committing using trac-post-commit-hook

    - by Lyon Blecher
    When running a commit with the trac-post-commit-hook, I receive a MERGE 200 OK error. I understand that this means the commit has succeeded on the server but the file status has not been updated on my local machine. However, I can't find any way to fix this issue. Would this be a problem with my setup, or something in the script? I'm using the stock standard script from the Trac site, and I'm committing through TortoiseSVN to VisualSVN Server, which is hosted on a Windows 2008 server. When I run the script through a command line I receive no errors; I only receive this error through TortoiseSVN.

    Read the article

  • Clear fields on CreateUserWizard, Login control

    - by Midhat
    I have a CreateUserWizard and a Login control on a page; both of them are customized (standard textboxes are replaced by RadTextBoxes). When I enter a value in the form and refresh the browser without submitting, the forms retain their values. Is there any way I can clear these fields on refresh? I have tried setting EnableViewState to false on the controls (as seen somewhere on the web), but it doesn't work. I have added code in Page_Load to clear the fields if the page is not a postback. It looks something like this:

      if (!IsPostBack)
      {
          ((RadTextBox)Login1.FindControl("Username")).Text = "";
          ((RadTextBox)Login1.FindControl("Password")).Text = "";
          ((RadTextBox)CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("Username")).Text = "";
          ((RadTextBox)CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("Password")).Text = "";
          ((RadTextBox)CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("confirmPassword")).Text = "";
          ((RadTextBox)CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("Email")).Text = "";
      }

    Still to no avail. Any suggestions?
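
    One guess worth trying (an assumption, not a confirmed fix): browsers often restore form-field values on reload regardless of server-side state, and disabling autocomplete on the form usually suppresses that:

      protected void Page_Load(object sender, EventArgs e)
      {
          // ask the browser not to restore/remember field values on reload
          Page.Form.Attributes["autocomplete"] = "off";
      }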

    Read the article

  • Rails & ActiveRecord: Appending methods to models that inherit from ActiveRecord::Base

    - by PlankTon
    I have a standard ActiveRecord model with the following:

      class MyModel < ActiveRecord::Base
        custom_method :first_field, :second_field
      end

    At the moment, that custom_method is picked up by a module sent to ActiveRecord::Base. The functionality basically works, but of course it attaches itself to every model class, not just MyModel. So if I have MyModel and MyOtherModel in the same action, it'll assume MyOtherModel has custom_method :first_field, :second_field as well.

    So, my question is: how do I attach a method (eg: def custom_method(*args)) to every class that inherits from ActiveRecord::Base, but not by attaching it to ActiveRecord::Base itself? Any ideas appreciated.
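
    One sketch of a way around the shared state (the module still extends ActiveRecord::Base, but the recorded fields live in a class-level instance variable, which each subclass keeps separately):

      module CustomMethod
        def custom_method(*fields)
          @custom_fields = fields   # stored per class, not on ActiveRecord::Base
        end

        def custom_fields
          @custom_fields || []
        end
      end

      ActiveRecord::Base.extend(CustomMethod)

      # MyModel.custom_fields      => [:first_field, :second_field]
      # MyOtherModel.custom_fields => []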

    Read the article

  • Eclipse: Subclipse "Edit Conflicts"

    - by Wilco
    I have found that when using Subclipse to edit conflicts, all my syntax color settings are preserved except for the background color, which is reset to the standard white. Using my particular color scheme makes it almost impossible to read any of the text when stuck with a white background. Is there anywhere I can change this default background color? There doesn't seem to be any way to do this from the preferences window, but perhaps there is a config file somewhere I could edit? Any help would be very much appreciated (my eyes will thank you too)!

    Read the article

  • Decode sparse JSON array to PHP array

    - by Isaac Sutherland
    I can create a sparse PHP array (or map) using the command:

      $myarray = array(10=>'hi', 'test20'=>'howdy');

    I want to serialize/deserialize this as JSON. I can serialize it using the command:

      $json = json_encode($myarray);

    which results in the string {"10":"hi","test20":"howdy"}. However, when I deserialize this and cast it to an array using the command:

      $mynewarray = (array)json_decode($json);

    I seem to lose any mappings whose keys were not valid PHP identifiers. That is, mynewarray has the mapping 'test20'=>'howdy', but not 10=>'hi' nor '10'=>'hi'. Is there a way to preserve the numerical keys in a PHP map when converting to and back from JSON using the standard json_encode / json_decode functions?
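
    For reference, a sketch of the usual way around the lossy object-to-array cast: ask json_decode for an associative array directly via its second argument, which brings the numeric key back as int 10:

      <?php
      $myarray = array(10 => 'hi', 'test20' => 'howdy');
      $json = json_encode($myarray);        // {"10":"hi","test20":"howdy"}

      // decode straight to an array instead of casting a stdClass object
      $mynewarray = json_decode($json, true);
      var_dump($mynewarray[10]);            // string(2) "hi"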

    Read the article

  • How to convert latitude or longitude to meters?

    - by Adam Taylor
    Hi, if I have a latitude or longitude reading in standard NMEA format, is there an easy way / formula to convert that reading to meters, which I can then implement in Java (J9)?

    Edit: OK, it seems what I want to do is not easily possible; however, what I really want to do is this: say I have the lat and long of a waypoint and the lat and long of a user, is there an easy way to compare them to decide when to tell the user they are within a reasonably close distance of the waypoint? I realise "reasonable" is subjective, but is this easily doable or still overly maths-y? Thanks, Adam
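
    For the distance check, the great-circle (haversine) formula is the standard tool; a Java sketch, assuming the full java.lang.Math is available on the target J9 configuration:

      // distance in meters between two lat/lon pairs (degrees), WGS-84 mean radius
      static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
          final double R = 6371000; // mean Earth radius in meters
          double dLat = Math.toRadians(lat2 - lat1);
          double dLon = Math.toRadians(lon2 - lon1);
          double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                   + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
          return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
      }

      // then: if (distanceMeters(userLat, userLon, wpLat, wpLon) < 50) { /* close enough */ }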

    Read the article

  • Microsoft Sync Framework - How to reprovision a table (or entire scope) after schema changes?

    - by Rabbi
    B"H

    I have already set up syncing with Microsoft Sync Framework, and now I need to add fields to a table. How do I re-provision the databases? The setup is exceedingly simple:

    - Two SQL Express 2008 servers
    - The scope includes the entire database
    - Using Microsoft Sync Framework 2.0
    - Synchronizing by direct access
    - Using the standard new SqlSyncProvider

    Do I make the structural changes at both ends? Or do I only change one server and let Sync Framework somehow propagate the change? Do I need to delete the _tracking tables and/or the stored procedures? How about the triggers? Has anyone been using the Sync Framework? Please help.

    Read the article

  • Is there a format or service for resume/CV data?

    - by Ben Dauphinee
    I have noticed, through the process of signing up for various freelance, job-seeking, and professional network sites, that they all want your resume/CV data. And I am really getting tired of copy/pasting this data, especially since I have a website. Is there a standard format or service somewhere, that I do not know about, for this data? If not, does anyone want to help me build something like this out? I'm thinking of a service similar to OpenID that allows you to maintain a central resume to have your data pulled from. No more filling in the same data over and over, and no more having to maintain copies on the plethora of websites that hold that data. Takers?

    Read the article
