Search Results

Search found 20904 results on 837 pages for 'disk performance'.


  • 404 Error Hosting WCF Service via IIS 7.5 Shared Content

    - by Chad Gruka
    We're attempting to host a WCF service (.NET 3.5 SP1) using Shared Content on IIS 7.5. At the moment it returns a 404 error. My assumption at this point is that WCF cannot be hosted via a UNC path (see the workaround: Hosting WCF service in IIS6 using UNC). Steps I've taken:
    - Established FullTrust for the UNC path.
    - The service works when hosted on a local disk.
    - A basic HTML page renders without issue from the UNC path.
    - An ASPX page renders without issue from the UNC path.
    - Explicitly set "Full Control" permissions for the user running the service.
    The reason for using Shared Content in IIS 7.5 is to host this WCF service, and several other websites, in a web farm. Using Shared Content avoids the need for file replication between the nodes in the farm. (Note that we are also using Shared Configuration to support this environment.)
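
    For reference, the trust grant mentioned above amounts to something along these lines (a sketch only; the share path is a placeholder, and whether Shared Content honours this for a UNC-hosted .svc is exactly what is in question):

      <!-- web.config: run the application at full trust -->
      <system.web>
        <trust level="Full" />
      </system.web>

      rem grant the share itself FullTrust in machine-level CAS policy (.NET 3.5)
      caspol -m -ag 1 -url "file://\\server\share\*" FullTrust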

    Read the article

  • Multi-page forms on ASP.NET MVC

    - by Jay
    Hi, I have decided to use ASP.NET MVC to develop multi-page (registration) forms in ASP.NET. There will be two buttons on each page that allow the user to navigate to the previous and next page. When the user navigates back to a page they recently filled out, the data should be displayed to them. I understand ASP.NET MVC should remain stateless, but how should I maintain page information when the user navigates back and forth? Should I:
    1. Save the information to a database and retrieve it on each page change?
    2. Save the information to the session (see the sketch below)?
    3. Load all the fields up front and display only what's needed with JavaScript?
    This registration form is going to be used on multiple sites but with different sets of questions (some may be the same). If performance is a main concern, should I avoid generating these forms dynamically? Jay
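
    To make option 2 concrete, a minimal sketch of the session approach (the controller, model and action names here are illustrative, not from the real app):

      public class RegistrationController : Controller
      {
          [HttpGet]
          public ActionResult Step1()
          {
              // repopulate the form if the user navigated back to this step
              var saved = Session["Registration.Step1"] as Step1Model;
              return View(saved ?? new Step1Model());
          }

          [HttpPost]
          public ActionResult Step1(Step1Model model)
          {
              if (!ModelState.IsValid) return View(model);
              // stash the completed step so Back/Next can re-display it
              Session["Registration.Step1"] = model;
              return RedirectToAction("Step2");
          }
      }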

    Read the article

  • Robust and fast checksum algorithm?

    - by bene
    Which checksum algorithm can you recommend for the following use case? I want to generate checksums of small JPEG files (~8 kB each) to check whether the content has changed. Using the filesystem's date modified is unfortunately not an option. The checksum need not be cryptographically strong, but it should robustly indicate changes of any size. The second criterion is speed, since it should be possible to process at least hundreds of images per second (on a modern CPU). The calculation will be done on a server with several clients. The clients send the images over gigabit TCP to the server, so there's no disk I/O as a bottleneck.
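
    For scale: even a full MD5 digest over an 8 kB in-memory buffer is cheap, so the speed target is easy to hit. A minimal C# sketch of the server-side check (assuming the image has already been received into a byte array):

      using System.Security.Cryptography;

      static class ImageChecksum
      {
          // MD5 is stronger than this use case needs, but any single-byte
          // change alters the digest, and a modern CPU hashes thousands of
          // 8 kB buffers per second. A CRC32 or other non-cryptographic
          // hash would be faster still, at some cost in collision resistance.
          public static byte[] Compute(byte[] imageBytes)
          {
              using (var md5 = MD5.Create())
                  return md5.ComputeHash(imageBytes);
          }
      }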

    Read the article

  • Service broker with SqlNotificationRequest

    - by user171523
    I am in the process of evaluating Service Broker with SQL notification for my project. My requirement is: a user places an order from System A, which updates the Order table. As soon as the order is placed, I need to notify System B. I have done a quick POC with a trigger, Service Broker and SQL notification in ADO.NET, and it works as I expected. What I would like to know from the group: A) What are the best practices I need to follow for this? B) What are the disadvantages of the above approach, if any? C) Are there any disadvantages to using triggers, and if so, what are they for the above approach? The Order table will receive 1000 to 1500 orders from System A every day. I would also like to know about the performance of the above approach.
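
    For context, the ADO.NET side of my POC amounts to a SqlDependency subscription along these lines (a sketch; the connection string and the Orders schema are placeholders, and SqlDependency rides on Service Broker under the covers):

      using System.Data.SqlClient;

      class OrderWatcher
      {
          readonly string _connectionString;

          public OrderWatcher(string connectionString)
          {
              _connectionString = connectionString;
              SqlDependency.Start(_connectionString);  // sets up the broker queue
              Subscribe();
          }

          void Subscribe()
          {
              using (var conn = new SqlConnection(_connectionString))
              // notification queries need two-part table names and an
              // explicit column list -- no SELECT *
              using (var cmd = new SqlCommand("SELECT OrderId, Status FROM dbo.Orders", conn))
              {
                  var dep = new SqlDependency(cmd);
                  dep.OnChange += (s, e) =>
                  {
                      // notify System B here, then re-register: each
                      // subscription fires exactly once
                      Subscribe();
                  };
                  conn.Open();
                  using (var r = cmd.ExecuteReader()) { while (r.Read()) { } }
              }
          }
      }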

    Read the article

  • Problem with the in-app purchase consumable model

    - by kunal-dutta
    I have created a non-consumable in-app purchase item, and now I want to create a consumable in-app purchase that the user buys every time he uses it, and also a subscription-model in-app purchase. Everything works as expected, except that when I buy the item more than once, the iPhone pops up a message saying "You've already purchased this item. Do you want to buy it again?". Is it possible to disable this dialog and proceed to the actual purchase? And what will I have to change in the following code for the different models? In InAppPurchaseManager.m:

      @implementation InAppPurchaseManager

      //@synthesize purchasableObjects;
      //@synthesize storeObserver;
      @synthesize proUpgradeProduct;
      @synthesize productsRequest;

      //BOOL featureAPurchased;
      //BOOL featureBPurchased;
      //static InAppPurchaseManager *_sharedStoreManager; // self

      - (void)dealloc {
          //[_sharedStoreManager release];
          //[storeObserver release];
          [super dealloc];
      }

      - (void)requestProUpgradeProductData {
          NSSet *productIdentifiers = [NSSet setWithObject:@"com.vigyaapan.iWorkOut1"];
          productsRequest = [[SKProductsRequest alloc] initWithProductIdentifiers:productIdentifiers];
          productsRequest.delegate = self;
          [productsRequest start];
          // we will release the request object in the delegate callback
      }

      #pragma mark -
      #pragma mark SKProductsRequestDelegate methods

      - (void)productsRequest:(SKProductsRequest *)request didReceiveResponse:(SKProductsResponse *)response {
          //NSArray *products = response.products;
          //proUpgradeProduct = [products count] == 1 ? [[products firstObject] retain] : nil;
          if (proUpgradeProduct) {
              NSLog(@"Product title: %@", proUpgradeProduct.localizedTitle);
              NSLog(@"Product description: %@", proUpgradeProduct.localizedDescription);
              NSLog(@"Product price: %@", proUpgradeProduct.price);
              NSLog(@"Product id: %@", proUpgradeProduct.productIdentifier);
          }
          /*
          for (NSString *invalidProductId in response.invalidProductIdentifiers) {
              NSLog(@"Invalid product id: %@", invalidProductId);
          }
          */
          // finally release the request we alloc/init'ed in requestProUpgradeProductData
          [productsRequest release];
          [[NSNotificationCenter defaultCenter] postNotificationName:kInAppPurchaseManagerProductsFetchedNotification object:self userInfo:nil];
      }

      #pragma mark -
      #pragma mark Public methods

      // call this method once on startup
      - (void)loadStore {
          // restarts any purchases if they were interrupted last time the app was open
          [[SKPaymentQueue defaultQueue] addTransactionObserver:self];
          // get the product description (defined in early sections)
          [self requestProUpgradeProductData];
      }

      // call this before making a purchase
      - (BOOL)canMakePurchases {
          return [SKPaymentQueue canMakePayments];
      }

      // kick off the upgrade transaction
      - (void)purchaseProUpgrade {
          SKPayment *payment = [SKPayment paymentWithProductIdentifier:@"9820091347"];
          [[SKPaymentQueue defaultQueue] addPayment:payment];
      }

      #pragma mark -
      #pragma mark Purchase helpers

      // saves a record of the transaction by storing the receipt to disk
      - (void)recordTransaction:(SKPaymentTransaction *)transaction {
          if ([transaction.payment.productIdentifier isEqualToString:kInAppPurchaseProUpgradeProductId]) {
              // save the transaction receipt to disk
              [[NSUserDefaults standardUserDefaults] setValue:transaction.transactionReceipt forKey:@"proUpgradeTransactionReceipt"];
              [[NSUserDefaults standardUserDefaults] synchronize];
          }
      }

      // enable pro features
      - (void)provideContent:(NSString *)productId {
          if ([productId isEqualToString:kInAppPurchaseProUpgradeProductId]) {
              // enable the pro features
              [[NSUserDefaults standardUserDefaults] setBool:YES forKey:@"isProUpgradePurchased"];
              [[NSUserDefaults standardUserDefaults] synchronize];
          }
      }

      - (void)finishTransaction:(SKPaymentTransaction *)transaction wasSuccessful:(BOOL)wasSuccessful {
          // remove the transaction from the payment queue
          [[SKPaymentQueue defaultQueue] finishTransaction:transaction];
          NSDictionary *userInfo = [NSDictionary dictionaryWithObjectsAndKeys:transaction, @"transaction", nil];
          if (wasSuccessful) {
              // send out a notification that we've finished the transaction
              [[NSNotificationCenter defaultCenter] postNotificationName:kInAppPurchaseManagerTransactionSucceededNotification object:self userInfo:userInfo];
          } else {
              // send out a notification for the failed transaction
              [[NSNotificationCenter defaultCenter] postNotificationName:kInAppPurchaseManagerTransactionFailedNotification object:self userInfo:userInfo];
          }
      }

      - (void)completeTransaction:(SKPaymentTransaction *)transaction {
          [self recordTransaction:transaction];
          [self provideContent:transaction.payment.productIdentifier];
          [self finishTransaction:transaction wasSuccessful:YES];
      }

      - (void)restoreTransaction:(SKPaymentTransaction *)transaction {
          [self recordTransaction:transaction.originalTransaction];
          [self provideContent:transaction.originalTransaction.payment.productIdentifier];
          [self finishTransaction:transaction wasSuccessful:YES];
      }

      - (void)failedTransaction:(SKPaymentTransaction *)transaction {
          if (transaction.error.code != SKErrorPaymentCancelled) {
              // error!
              [self finishTransaction:transaction wasSuccessful:NO];
          } else {
              // this is fine, the user just cancelled, so don't notify
              [[SKPaymentQueue defaultQueue] finishTransaction:transaction];
          }
      }

      - (void)paymentQueue:(SKPaymentQueue *)queue updatedTransactions:(NSArray *)transactions {
          for (SKPaymentTransaction *transaction in transactions) {
              switch (transaction.transactionState) {
                  case SKPaymentTransactionStatePurchased:
                      [self completeTransaction:transaction];
                      break;
                  case SKPaymentTransactionStateFailed:
                      [self failedTransaction:transaction];
                      break;
                  case SKPaymentTransactionStateRestored:
                      [self restoreTransaction:transaction];
                      break;
                  default:
                      break;
              }
          }
      }

      @end

    In SKProduct.m:

      @implementation SKProduct (LocalizedPrice)

      - (NSString *)localizedPrice {
          NSNumberFormatter *numberFormatter = [[NSNumberFormatter alloc] init];
          [numberFormatter setFormatterBehavior:NSNumberFormatterBehavior10_4];
          [numberFormatter setNumberStyle:NSNumberFormatterCurrencyStyle];
          [numberFormatter setLocale:self.priceLocale];
          NSString *formattedString = [numberFormatter stringFromNumber:self.price];
          [numberFormatter release];
          return formattedString;
      }

      @end

    Read the article

  • Why shouldn't I always use nullable types in C#?

    - by Matthew Vines
    I've been searching for some good guidance on this since the concept was introduced in .NET 2.0. Why would I ever want to use non-nullable data types in C#? (A better question: why wouldn't I choose nullable types by default, and only use non-nullable types when that explicitly makes sense?) Is there a 'significant' performance hit to choosing a nullable data type over its non-nullable peer? I much prefer to check my values against null instead of Guid.Empty, string.Empty, DateTime.MinValue, <= 0, etc., and to work with nullable types in general. The only reason I don't choose nullable types more often is the itchy feeling in the back of my head that makes me feel like it's more than backwards compatibility that forces that extra '?' character to explicitly allow a null value. Is there anybody out there who always (or almost always) chooses nullable types rather than non-nullable types? Thanks for your time,
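
    To make the comparison concrete, the sentinel style versus the nullable style (a small self-contained sketch; the runtime cost of Nullable<T> is a struct wrapper holding the value plus a HasValue flag, and a flag check on access):

      using System;

      class NullableDemo
      {
          static void Main()
          {
              // sentinel style: "no value" smuggled in as a magic value
              DateTime lastLoginSentinel = DateTime.MinValue;
              if (lastLoginSentinel != DateTime.MinValue)
                  Console.WriteLine(lastLoginSentinel);

              // nullable style: "no value" is explicit in the type
              DateTime? lastLogin = null;
              if (lastLogin.HasValue)
                  Console.WriteLine(lastLogin.Value);

              int? retryCount = null;
              int effective = retryCount ?? 0;  // null-coalescing default
              Console.WriteLine(effective);
          }
      }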

    Read the article

  • MVVM good design: DataSet or a RowViewModel?

    - by LnDCobra
    I have just started learning MVVM and am having a dilemma. I have a main ViewModel, and inside this model I have a number of DataSets. Should I be creating a new ViewModel for each row inside the DataSet, or expose the DataSet itself as a DependencyProperty? For now the DataSet has about 20 rows inside it, and the thought of iterating through each row to create a ViewModel bound to each row might not be the best option for performance and memory reasons in the future, such as when there are 1000+ rows. Should I still go ahead and create a RowViewModel, iterate through the DataSet, and keep an ObservableCollection of them, or just expose the DataSet? Any help would be greatly appreciated.
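
    For reference, the RowViewModel option I'm weighing would look roughly like this (a sketch; the table and column names are made up):

      using System.Collections.ObjectModel;
      using System.Data;
      using System.Linq;

      // one lightweight wrapper per DataRow; the view binds to the
      // ObservableCollection instead of to the raw DataSet
      public class RowViewModel
      {
          private readonly DataRow _row;
          public RowViewModel(DataRow row) { _row = row; }
          public string Name { get { return (string)_row["Name"]; } }
      }

      public class MainViewModel
      {
          public ObservableCollection<RowViewModel> Rows { get; private set; }

          public MainViewModel(DataSet data)
          {
              Rows = new ObservableCollection<RowViewModel>(
                  data.Tables["Orders"].Rows.Cast<DataRow>()
                      .Select(r => new RowViewModel(r)));
          }
      }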

    Read the article

  • Dynamically generate PDF and email it using django

    - by Shane
    I have a django app that dynamically generates a PDF (using reportlab + pypdf) from user input on an HTML form, and returns the HTTP response with an application/pdf MIME type. I want to have the option of either doing the above or emailing the generated PDF, but I cannot figure out how to use the EmailMessage class's attach(filename=None, content=None, mimetype=None) method. The documentation doesn't give much of a description of what kind of object content is supposed to be. I've tried a file object and the above application/pdf HTTP response. I currently have a workaround where my view saves a PDF to disk, and I then attach the resulting file to an outgoing email using the attach_file() method. This seems wrong to me, and I'm pretty sure there is a better way.
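
    In case it clarifies what I'm after, I presume the in-memory version should look something like this (a sketch; pdf_buffer stands for the StringIO buffer reportlab rendered into, and content is just the raw PDF bytes, not a file object or a response):

      from django.core.mail import EmailMessage

      pdf_data = pdf_buffer.getvalue()  # the same bytes written to the HttpResponse

      msg = EmailMessage('Your document', 'PDF attached.',
                         'from@example.com', ['to@example.com'])
      msg.attach('document.pdf', pdf_data, 'application/pdf')
      msg.send()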

    Read the article

  • com.sun.management.OperatingSystemMXBean use in an OSGi bundle

    - by Paul Whelan
    I have some legacy code that was used to monitor my application's CPU, memory, etc. that I want to convert to a bundle. Now when I start this bundle, it complains: Missing Constraint: Import-Package: com.sun.management; version="0.0.0". I had used the OperatingSystemMXBean to get access to stats on the JVM. My question is: can I use this class inside an OSGi container, and if so, how? Or should I use some other way to monitor my application? Before OSGi, I was making an RMI call to the application from a web frontend to get the node's performance figures.

    Read the article

  • F# currying efficiency?

    - by Eamon Nerbonne
    I have a function that looks as follows:

      let isInSet setElems normalize p =
          normalize p |> (Set.ofList setElems).Contains

    This function can be used to quickly check whether an element is semantically part of some set; for example, to check if a file path belongs to an html file:

      let getLowerExtension p = (Path.GetExtension p).ToLowerInvariant()
      let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension

    However, when I use a function such as the above, performance is poor, since evaluation of the function body as written in isInSet seems to be delayed until all parameters are known; in particular, invariant bits such as (Set.ofList setElems).Contains are re-evaluated on each execution of isHtmlPath. How can I best maintain F#'s succinct, readable nature while still getting the more efficient behavior in which the set construction is pre-evaluated? The above is just an example; I'm looking for a general pattern that avoids bogging me down in implementation details. Where possible I'd like to avoid being distracted by details such as the implementation's execution order, since that's usually not important to me, and being forced to think about it kind of undermines a major selling point of functional programming.
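
    A sketch of the kind of rewrite I'm considering, in case it helps frame the question (binding the set before taking the final parameter, so that partial application does the expensive work exactly once):

      let isInSet setElems normalize =
          let elems = Set.ofList setElems       // built once, at partial application
          fun p -> elems.Contains(normalize p)  // only this closure runs per call

      // isHtmlPath now reuses one prebuilt set across all calls
      let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension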

    Read the article

  • Nice name for 'decorator' class?

    - by Lajos Nagy
    I would like to separate the API I'm working on into two sections: 'bare-bones' and 'cushy'. The idea is that all method calls in the 'cushy' section could be expressed in terms of the ones in the 'bare-bones' section, that is, they would only serve as convenience methods for the quick-and-dirty. The reason I would like to do this is that very often when people are beginning to use an API for the first time, they are not interested in details and performance: they just want to get it working. Anybody tried anything similar before? I'm particularly interested in naming conventions and organizing the code.

    Read the article

  • SQL Server, temporary tables with truncate vs table variable with delete

    - by Richard
    I have a stored procedure inside which I create a temporary table that typically contains between 1 and 10 rows. This table is truncated and filled many times during the stored procedure; it is truncated because that is faster than DELETE. Would I gain any performance by replacing this temporary table with a table variable, given that I would then suffer the penalty of using DELETE (TRUNCATE does not work on table variables)? In other words: while table variables are mainly in memory and are generally faster than temp tables, do I lose that benefit by having to DELETE rather than TRUNCATE?
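
    Concretely, the two variants being compared (a sketch with a made-up schema):

      -- temp table version: TRUNCATE deallocates pages with minimal logging
      CREATE TABLE #work (Id int PRIMARY KEY, Amount money);
      -- ... fill and use, many times ...
      TRUNCATE TABLE #work;

      -- table variable version: TRUNCATE is not allowed here, so an
      -- unfiltered DELETE (fully logged, row by row) must empty it
      DECLARE @work TABLE (Id int PRIMARY KEY, Amount money);
      -- ... fill and use, many times ...
      DELETE FROM @work;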

    Read the article

  • Java Concurrency: CAS vs Locking

    - by Hugo Walker
    I'm currently reading the book Java Concurrency in Practice. Chapter 15 covers nonblocking algorithms and the compare-and-swap (CAS) method, and it states that CAS performs much better than locking. I want to ask people who have already worked with both of these concepts: which do you prefer, and when? Is CAS really so much faster? Personally, for me the usage of locks is much clearer and easier to understand, and maybe even better to maintain (please correct me if I am wrong). Should we really focus on building our concurrent code around CAS rather than locks to get a better performance boost, or is maintainability more important? I know there is perhaps no strict rule for when to use what, but I would just like to hear some opinions and experiences with the CAS approach.
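
    For anyone skimming who hasn't met the pattern: the canonical CAS retry loop looks like this (a C# sketch using Interlocked.CompareExchange; Java's AtomicInteger.compareAndSet expresses the same idea):

      using System.Threading;

      static class CasCounter
      {
          static int _count;

          public static void Increment()
          {
              while (true)
              {
                  int observed = _count;
                  // atomically: if _count still equals 'observed', store
                  // observed + 1; returns the value seen before the call
                  if (Interlocked.CompareExchange(ref _count, observed + 1, observed) == observed)
                      return;  // our update won
                  // otherwise another thread raced us: re-read and retry,
                  // without ever blocking
              }
          }
      }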

    Read the article

  • Language/tech specific books which improve vendor-neutral development skills

    - by dotnetdev
    If I ask what the following books have in common: Accelerated C# 2010, C# in Depth, Pro C# 2008, the answer is that they help me improve, first, my understanding of C#, and second, my general coding skills. What language-specific or technology-specific books (like those named above) would teach me a great deal about general programming techniques and good habits? I'm thinking Java books would be very good for me (I code primarily in C#), as the two languages are similar, so I am sure that specialist books on Java threading, performance tuning, etc. can largely be applied to C# (though not 100% of a Java book's content). Thanks

    Read the article

  • NameValueCollection vs Dictionary<string,string>

    - by frankadelic
    Is there any reason I should use Dictionary<string,string> instead of NameValueCollection? (in C# / .NET Framework)

    Option 1, using NameValueCollection:

      // enter values:
      NameValueCollection nvc = new NameValueCollection()
      {
          {"key1", "value1"},
          {"key2", "value2"},
          {"key3", "value3"}
      };

      // retrieve values:
      foreach (string key in nvc.AllKeys)
      {
          string value = nvc[key];
          // do something
      }

    Option 2, using Dictionary<string,string>:

      // enter values:
      Dictionary<string, string> dict = new Dictionary<string, string>()
      {
          {"key1", "value1"},
          {"key2", "value2"},
          {"key3", "value3"}
      };

      // retrieve values:
      foreach (KeyValuePair<string, string> kvp in dict)
      {
          string key = kvp.Key;
          string val = kvp.Value;
          // do something
      }

    For these use cases, is there any advantage to using one versus the other? Any difference in performance, memory use, sort order, etc.?
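
    One behavioral difference worth keeping in mind while comparing (a small sketch): NameValueCollection allows several values under one key and does case-insensitive key lookups by default, while Dictionary<string,string> enforces unique keys:

      using System;
      using System.Collections.Generic;
      using System.Collections.Specialized;

      class Demo
      {
          static void Main()
          {
              var nvc = new NameValueCollection();
              nvc.Add("key1", "a");
              nvc.Add("key1", "b");            // allowed: values accumulate
              Console.WriteLine(nvc["KEY1"]);  // "a,b" -- comma-joined, case-insensitive
              Console.WriteLine(nvc.GetValues("key1").Length);  // 2

              var dict = new Dictionary<string, string>();
              dict.Add("key1", "a");
              // dict.Add("key1", "b");        // would throw ArgumentException
          }
      }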

    Read the article

  • Can I do video communication with silverlight 4.0?

    - by tom greene
    With Silverlight 4.0 it is possible to show live video of the user on the screen. Here is the code:

      VideoBrush videoBrush = new VideoBrush();
      CaptureSource captureSource = new CaptureSource
      {
          VideoCaptureDevice = CaptureDeviceConfiguration.GetAvailableVideoCaptureDevices().First()
      };
      bool b = CaptureDeviceConfiguration.RequestDeviceAccess();
      videoBrush.SetSource(captureSource);
      captureSource.Start();
      myrect.Fill = videoBrush;

    However, I am looking for a way to show the video to someone else; seeing oneself on screen is not that interesting. Is it possible? Do I need my own server? Can I use cloud services to do the communication? Are there performance issues?

    Read the article

  • PyPy -- how can it possibly beat CPython?

    - by Vulcan Eager
    From the Google Open Source Blog: PyPy is a reimplementation of Python in Python, using advanced techniques to try to attain better performance than CPython. Many years of hard work have finally paid off. Our speed results often beat CPython, ranging from being slightly slower, to speedups of up to 2x on real application code, to speedups of up to 10x on small benchmarks. How is this possible? Which Python implementation was used to implement PyPy? CPython? And what are the chances of a PyPyPy or PyPyPyPy beating their score? (On a related note... why would anyone try something like this?)

    Read the article

  • Is there any killer application for Ontology/semantics/OWL/RDF yet?

    - by narnirajesh
    Hi guys, I got interested in semantic technologies after reading a lot of books, blogs and articles on the net saying that they would make data machine-understandable, allow intelligent agents to do great reasoning, enable automated and dynamic service composition, etc. I have now been reading the same material for two years. The number of articles, blogs and semantic conferences has increased considerably, but I am still unable to see any killer application. Why is that? Or is there some application or product (commercial or open-source) already in existence that actually does all that is being boasted of? To put it more precisely: is there any product that leverages semantic technologies (especially RDF/OWL/SPARQL) and delivers functionality, performance or maintainability that would not have been possible with existing (non-semantic) technologies? Some product that is completely dependent on semantic technologies, really adds value for customers, and generates revenue?

    Read the article

  • Bacula windows client could not connect to Bacula director

    - by pr0f-r00t
    I have a Bacula server on my Linux Debian Squeeze host (Bacula version 5.0.2) and a Bacula client on Windows XP SP3. On my network the machines can see each other, share files and ping each other. On my local server I can run bconsole and the Director responds, but when I run bconsole or bat on my Windows client, the Director does not respond. Here are my configuration files:

    bacula-dir.conf:

      # Default Bacula Director Configuration file
      #
      # The only thing that MUST be changed is to add one or more
      # file or directory names in the Include directive of the
      # FileSet resource.
      #
      # For Bacula release 5.0.2 (28 April 2010) -- debian squeeze/sid
      #
      # You might also want to change the default email address
      # from root to your address. See the "mail" and "operator"
      # directives in the Messages resource.

      Director {                        # define myself
        Name = nima-desktop-dir
        DIRport = 9101                  # where we listen for UA connections
        QueryFile = "/etc/bacula/scripts/query.sql"
        WorkingDirectory = "/var/lib/bacula"
        PidDirectory = "/var/run/bacula"
        Maximum Concurrent Jobs = 1
        Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L"   # Console password
        Messages = Daemon
        DirAddress = 127.0.0.1
        # DirAddress = 72.16.208.1
      }

      JobDefs {
        Name = "DefaultJob"
        Type = Backup
        Level = Incremental
        Client = nima-desktop-fd
        FileSet = "Full Set"
        Schedule = "WeeklyCycle"
        Storage = File
        Messages = Standard
        Pool = File
        Priority = 10
        Write Bootstrap = "/var/lib/bacula/%c.bsr"
      }

      # Define the main nightly save backup job
      # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
      Job {
        Name = "BackupClient1"
        JobDefs = "DefaultJob"
      }

      #Job {
      #  Name = "BackupClient2"
      #  Client = nima-desktop2-fd
      #  JobDefs = "DefaultJob"
      #}

      # Backup the catalog database (after the nightly save)
      Job {
        Name = "BackupCatalog"
        JobDefs = "DefaultJob"
        Level = Full
        FileSet = "Catalog"
        Schedule = "WeeklyCycleAfterBackup"
        # This creates an ASCII copy of the catalog
        # Arguments to make_catalog_backup.pl are:
        #   make_catalog_backup.pl <catalog-name>
        RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
        # This deletes the copy of the catalog
        RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
        Write Bootstrap = "/var/lib/bacula/%n.bsr"
        Priority = 11                   # run after main backup
      }

      # Standard Restore template, to be changed by Console program.
      # Only one such job is needed for all Jobs/Clients/Storage ...
      Job {
        Name = "RestoreFiles"
        Type = Restore
        Client = nima-desktop-fd
        FileSet = "Full Set"
        Storage = File
        Pool = Default
        Messages = Standard
        Where = /nonexistant/path/to/file/archive/dir/bacula-restores
      }

      # job for vmware windows host
      Job {
        Name = "nimaxp-fd"
        Type = Backup
        Client = nimaxp-fd
        FileSet = "nimaxp-fs"
        Schedule = "WeeklyCycle"
        Storage = File
        Messages = Standard
        Pool = Default
        Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr"  # Change this
      }

      # job for vmware windows host
      Job {
        Name = "arg-michael-fd"
        Type = Backup
        Client = nimaxp-fd
        FileSet = "arg-michael-fs"
        Schedule = "WeeklyCycle"
        Storage = File
        Messages = Standard
        Pool = Default
        Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr"  # Change this
      }

      # List of files to be backed up
      FileSet {
        Name = "Full Set"
        Include {
          Options {
            signature = MD5
          }
          # Put your list of files here, preceded by 'File =', one per line,
          # or include an external list with:
          #   File = <file-name
          # Note: / backs up everything on the root partition.
          # If you have other partitions such as /usr or /home
          # you will probably want to add them too.
          # By default this is defined to point to the Bacula binary
          # directory to give a reasonable FileSet to backup to
          # disk storage during initial testing.
          File = /usr/sbin
        }
        # If you backup the root directory, the following two excluded
        # files can be useful
        Exclude {
          File = /var/lib/bacula
          File = /nonexistant/path/to/file/archive/dir
          File = /proc
          File = /tmp
          File = /.journal
          File = /.fsck
        }
      }

      # List of files to be backed up
      FileSet {
        Name = "nimaxp-fs"
        Enable VSS = yes
        Include {
          Options {
            signature = MD5
          }
          File = "C:\softwares"
          File = C:/softwares
          File = "C:/softwares"
        }
      }

      # List of files to be backed up
      FileSet {
        Name = "arg-michael-fs"
        Enable VSS = yes
        Include {
          Options {
            signature = MD5
          }
          File = "C:\softwares"
          File = C:/softwares
          File = "C:/softwares"
        }
      }

      # When to do the backups: full backup on the first sunday of the month,
      # differential (i.e. incremental since full) every other sunday,
      # and incremental backups other days
      Schedule {
        Name = "WeeklyCycle"
        Run = Full 1st sun at 23:05
        Run = Differential 2nd-5th sun at 23:05
        Run = Incremental mon-sat at 23:05
      }

      # This schedule does the catalog. It starts after the WeeklyCycle
      Schedule {
        Name = "WeeklyCycleAfterBackup"
        Run = Full sun-sat at 23:10
      }

      # This is the backup of the catalog
      FileSet {
        Name = "Catalog"
        Include {
          Options {
            signature = MD5
          }
          File = "/var/lib/bacula/bacula.sql"
        }
      }

      # Client (File Services) to backup
      Client {
        Name = nima-desktop-fd
        Address = localhost
        FDPort = 9102
        Catalog = MyCatalog
        Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V6"   # password for FileDaemon
        File Retention = 30 days        # 30 days
        Job Retention = 6 months        # six months
        AutoPrune = yes                 # Prune expired Jobs/Files
      }

      # Client file service for vmware windows host
      Client {
        Name = nimaxp-fd
        Address = nimaxp
        FDPort = 9102
        Catalog = MyCatalog
        Password = "Ku8F1YAhDz5EMUQjiC9CcSw95Aho9XbXailUmjOaAXJP"   # password for FileDaemon
        File Retention = 30 days
        Job Retention = 6 months
        AutoPrune = yes
      }

      # Client file service for vmware windows host
      Client {
        Name = arg-michael-fd
        Address = 192.168.0.61
        FDPort = 9102
        Catalog = MyCatalog
        Password = "b4E9FU6s/9Zm4BVFFnbXVKhlyd/zWxj0oWITKK6CALR/"   # password for FileDaemon
        File Retention = 30 days
        Job Retention = 6 months
        AutoPrune = yes
      }

      # Second Client (File Services) to backup
      # You should change Name, Address, and Password before using
      #
      #Client {
      #  Name = nima-desktop2-fd
      #  Address = localhost2
      #  FDPort = 9102
      #  Catalog = MyCatalog
      #  Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V62"   # password for FileDaemon 2
      #  File Retention = 30 days
      #  Job Retention = 6 months
      #  AutoPrune = yes
      #}

      # Definition of file storage device
      Storage {
        Name = File
        # Do not use "localhost" here
        Address = localhost             # N.B. Use a fully qualified name here
        SDPort = 9103
        Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"
        Device = FileStorage
        Media Type = File
      }

      # Definition of DDS tape storage device
      #Storage {
      #  Name = DDS-4
      #  Address = localhost             # N.B. Use a fully qualified name here
      #  SDPort = 9103
      #  Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"   # password for Storage daemon
      #  Device = DDS-4                  # must be same as Device in Storage daemon
      #  Media Type = DDS-4              # must be same as MediaType in Storage daemon
      #  Autochanger = yes               # enable for autochanger device
      #}

      # Definition of 8mm tape storage device
      #Storage {
      #  Name = "8mmDrive"
      #  Address = localhost             # N.B. Use a fully qualified name here
      #  SDPort = 9103
      #  Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"
      #  Device = "Exabyte 8mm"
      #  MediaType = "8mm"
      #}

      # Definition of DVD storage device
      #Storage {
      #  Name = "DVD"
      #  Address = localhost             # N.B. Use a fully qualified name here
      #  SDPort = 9103
      #  Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"
      #  Device = "DVD Writer"
      #  MediaType = "DVD"
      #}

      # Generic catalog service
      Catalog {
        Name = MyCatalog
        # Uncomment the following line if you want the dbi driver
        # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
        dbname = "bacula"; dbuser = ""; dbpassword = ""
      }

      # Reasonable message delivery -- send most everything to email address
      # and to the console
      Messages {
        Name = Standard
        # NOTE! If you send to two or more email addresses, you will need
        # to replace the %r in the from field (-f part) with a single valid
        # email address in both the mailcommand and the operatorcommand.
        # What this does is set the email address that emails would display
        # in the FROM field, which is by default the same email as they're
        # being sent to. However, if you send email to more than one address,
        # you'll have to set the FROM address manually, to a single address.
        # For example, a '[email protected]' is better, since that tends to
        # tell (most) people that it's coming from an automated source.
        mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
        operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
        mail = root@localhost = all, !skipped
        operator = root@localhost = mount
        console = all, !skipped, !saved
        # WARNING! the following will create a file that you must cycle from
        # time to time, as it will grow indefinitely. However, it will also
        # keep all your messages if they scroll off the console.
        append = "/var/lib/bacula/log" = all, !skipped
        catalog = all
      }

      # Message delivery for daemon messages (no job).
      Messages {
        Name = Daemon
        mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
        mail = root@localhost = all, !skipped
        console = all, !skipped, !saved
        append = "/var/lib/bacula/log" = all, !skipped
      }

      # Default pool definition
      Pool {
        Name = Default
        Pool Type = Backup
        Recycle = yes                   # Bacula can automatically recycle Volumes
        AutoPrune = yes                 # Prune expired volumes
        Volume Retention = 365 days     # one year
      }

      # File Pool definition
      Pool {
        Name = File
        Pool Type = Backup
        Recycle = yes
        AutoPrune = yes
        Volume Retention = 365 days
        Maximum Volume Bytes = 50G      # Limit Volume size to something reasonable
        Maximum Volumes = 100           # Limit number of Volumes in Pool
      }

      # Scratch pool definition
      Pool {
        Name = Scratch
        Pool Type = Backup
      }

      # Restricted console used by tray-monitor to get the status of the director
      Console {
        Name = nima-desktop-mon
        Password = "-T0h6HCXWYNy0wWqOomysMvRGflQ_TA6c"
        CommandACL = status, .status
      }

    bacula-fd.conf on the client:

      # Default Bacula File Daemon Configuration file
      #
      # For Bacula release 5.0.3 (08/05/10) -- Windows MinGW32
      #
      # There is not much to change here except perhaps the
      # File daemon Name

      # "Global" File daemon configuration specifications
      FileDaemon {                      # this is me
        Name = nimaxp-fd
        FDport = 9102                   # where we listen for the director
        WorkingDirectory = "C:\\Program Files\\Bacula\\working"
        Pid Directory = "C:\\Program Files\\Bacula\\working"
        # Plugin Directory = "C:\\Program Files\\Bacula\\plugins"
        Maximum Concurrent Jobs = 10
      }

      # List Directors who are permitted to contact this File daemon
      Director {
        Name = Nima-desktop-dir
        Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L"
      }

      # Restricted Director, used by tray-monitor to get the
      # status of the file daemon
      Director {
        Name = nimaxp-mon
        Password = "q5b5g+LkzDXorMViFwOn1/TUnjUyDlg+gRTBp236GrU3"
        Monitor = yes
      }

      # Send all messages except skipped files back to Director
      Messages {
        Name = Standard
        director = Nima-desktop = all, !skipped, !restored
      }

    I have checked my firewall and even disabled it, but it doesn't work.

    Read the article

  • Handler invocation speed: Objective-C vs virtual functions

    - by Kerido
    I heard that calling a handler (delegate, etc.) in Objective-C can be even faster than calling a virtual function in C++. Is that really correct? If so, how can that be? AFAIK, virtual functions are not that slow to call. At least, this is my understanding of what happens when a virtual function is called:
    1. Compute the index of the function-pointer location in the vtbl.
    2. Obtain the pointer to the vtbl.
    3. Dereference the pointer and obtain the beginning of the array of function pointers.
    4. Offset (in pointer scale) the beginning of the array by the index value obtained in step 1.
    5. Issue a call instruction.
    Unfortunately, I don't know Objective-C, so it's hard for me to compare performance. But at least the mechanism of a virtual function call doesn't look that slow, right? How can something other than a static function call be faster?

    Read the article

  • Concatenate boost::dynamic_bitset or std::bitset

    - by MOnsDaR
    Hey, what is the best way to concatenate two bitsets? For example, I've got

      boost::dynamic_bitset<> test1( std::string("1111") );
      boost::dynamic_bitset<> test2( std::string("00") );

    They should be concatenated into a third bitset test3, which then holds 111100. Solutions should use boost::dynamic_bitset; if the solution also works with std::bitset, that would be nice too. There should be a focus on performance when concatenating the bits.

    Read the article

  • problems mounting an external IDE drive via USB in ubuntu

    - by Roy Rico
    I am having a problem connecting a specific IDE drive to my Linux box. It's an old drive from which I just want to get about 3 GB of files.

    INFO: I am trying to connect a 200 GB IDE Maxtor drive, internally and externally.

    Externally: I am using a self-powered USB IDE external drive enclosure which I have used to connect various drives, under Ubuntu and Windows, in the past. Other posts suggested the problem could be that I formatted the /dev/sdc device instead of the /dev/sdc1 partition when I originally formatted the drive.

    Internally: I only have one machine left that has an internal IDE interface, and it's got XP on it. I plugged this drive internally into this machine with Windows XP and used the ext2/ext3 drivers to mount the drive, but some files have question marks (?) in the file names, which is breaking my copy process in Windows, and I can't delete those files under Windows. Ubuntu Linux will not install on my only remaining machine that has an IDE controller.

    I have tried the suggestions in the questions below:
    http://superuser.com/questions/88182/mount-an-external-drive-in-ubuntu
    http://superuser.com/questions/23210/ubuntu-fails-to-mount-usb-drive

    It looks like I can see the drive in /proc/partitions:

      $ cat /proc/partitions
      major minor  #blocks   name
         8     0   78125000  sda
         8     1   74894998  sda1
         8     2          1  sda2
         8     5    3229033  sda5
         8    16  199148544  sdb   <-- could be my drive?

    but it's not listed under fdisk -l:

      $ fdisk -l
      Disk /dev/sda: 80.0 GB, 80000000000 bytes
      255 heads, 63 sectors/track, 9726 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xd0f4738c

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1        9324    74894998+  83  Linux
      /dev/sda2            9325        9726     3229065    5  Extended
      /dev/sda5            9325        9726     3229033+  82  Linux swap / Solaris

    And here is my log from /var/log/messages, with a bunch of weird output; can someone tell me what that weird output is?

      Mar 3 19:49:40 mala kernel: [687455.112029] usb 1-7: new high speed USB device using ehci_hcd and address 3
      Mar 3 19:49:41 mala kernel: [687455.248576] usb 1-7: configuration #1 chosen from 1 choice
      Mar 3 19:49:41 mala kernel: [687455.267450] Initializing USB Mass Storage driver...
      Mar 3 19:49:41 mala kernel: [687455.269180] scsi4 : SCSI emulation for USB Mass Storage devices
      Mar 3 19:49:41 mala kernel: [687455.269410] usbcore: registered new interface driver usb-storage
      Mar 3 19:49:41 mala kernel: [687455.269416] USB Mass Storage support registered.
      Mar 3 19:49:46 mala kernel: [687460.270917] scsi 4:0:0:0: Direct-Access Maxtor 6 Y200P0 YAR4 PQ: 0 ANSI: 2
      Mar 3 19:49:46 mala kernel: [687460.271485] sd 4:0:0:0: Attached scsi generic sg2 type 0
      Mar 3 19:49:46 mala kernel: [687460.278858] sd 4:0:0:0: [sdb] 398297088 512-byte logical blocks: (203 GB/189 GiB)
      Mar 3 19:49:46 mala kernel: [687460.280866] sd 4:0:0:0: [sdb] Write Protect is off
      Mar 3 19:50:16 mala kernel: [687460.283784] sdb:
      Mar 3 19:50:16 mala kernel: [687491.112020] usb 1-7: reset high speed USB device using ehci_hcd and address 3
      Mar 3 19:50:47 mala kernel: [687522.120030] usb 1-7: reset high speed USB device using ehci_hcd and address 3
      Mar 3 19:51:18 mala kernel: [687553.112034] usb 1-7: reset high speed USB device using ehci_hcd and address 3
      Mar 3 19:51:49 mala kernel: [687584.116025] usb 1-7: reset high speed USB device using ehci_hcd and address 3
      Mar 3 19:52:02 mala kernel: [687596.170632] type=1505 audit(1267671122.035:31): operation="profile_replace" pid=8426 name=/usr/lib/cups/backend/cups-pdf
      Mar 3 19:52:02 mala kernel: [687596.171551] type=1505 audit(1267671122.035:32): operation="profile_replace" pid=8426 name=/usr/sbin/cupsd
      Mar 3 19:52:06 mala kernel: [687600.908056] async/0 D c08145c0 0 7655 2 0x00000000
      Mar 3 19:52:06 mala kernel: [687600.908062] e5601d38 00000046 e5774000 c08145c0 e4c2a848 c08145c0 d203973a 0002713d
      Mar 3 19:52:06 mala kernel: [687600.908072] c08145c0 c08145c0 e4c2a848 c08145c0 00000000 0002713d c08145c0 f0a98c00
      Mar 3 19:52:06 mala kernel: [687600.908079] e4c2a5b0 c20125c0 00000002 e5601d80 e5601d44 c056f3be e5601d78 e5601d4c
      Mar 3 19:52:06 mala kernel: [687600.908087] Call Trace:
      Mar 3 19:52:06 mala kernel: [687600.908099] [<c056f3be>] io_schedule+0x1e/0x30
      Mar 3 19:52:06 mala kernel: [687600.908107] [<c01b2cf5>] sync_page+0x35/0x40
      Mar 3 19:52:06 mala kernel: [687600.908111] [<c056f8f7>] __wait_on_bit_lock+0x47/0x90
      Mar 3 19:52:06 mala kernel: [687600.908115] [<c01b2cc0>] ? sync_page+0x0/0x40
      Mar 3 19:52:06 mala kernel: [687600.908121] [<c020f390>] ? blkdev_readpage+0x0/0x20
      Mar 3 19:52:06 mala kernel: [687600.908125] [<c01b2ca9>] __lock_page+0x79/0x80
      Mar 3 19:52:06 mala kernel: [687600.908130] [<c015c130>] ? wake_bit_function+0x0/0x50
      Mar 3 19:52:06 mala kernel: [687600.908135] [<c01b459f>] read_cache_page_async+0xbf/0xd0
      Mar 3 19:52:06 mala kernel: [687600.908139] [<c01b45c2>] read_cache_page+0x12/0x60
      Mar 3 19:52:06 mala kernel: [687600.908144] [<c0232dca>] read_dev_sector+0x3a/0x80
      Mar 3 19:52:06 mala kernel: [687600.908148] [<c0233d3e>] adfspart_check_ICS+0x1e/0x160
      Mar 3 19:52:06 mala kernel: [687600.908152] [<c023339f>] ? disk_name+0xaf/0xc0
      Mar 3 19:52:06 mala kernel: [687600.908157] [<c0233d20>] ? adfspart_check_ICS+0x0/0x160
      Mar 3 19:52:06 mala kernel: [687600.908161] [<c02334de>] check_partition+0x10e/0x180
      Mar 3 19:52:06 mala kernel: [687600.908165] [<c02335f6>] rescan_partitions+0xa6/0x330
      Mar 3 19:52:06 mala kernel: [687600.908171] [<c0312472>] ? kobject_get+0x12/0x20
      Mar 3 19:52:06 mala kernel: [687600.908175] [<c0312472>] ? kobject_get+0x12/0x20
      Mar 3 19:52:06 mala kernel: [687600.908180] [<c039fc43>] ? get_device+0x13/0x20
      Mar 3 19:52:06 mala kernel: [687600.908185] [<c03c263f>] ? sd_open+0x5f/0x1b0
      Mar 3 19:52:06 mala kernel: [687600.908189] [<c020fda0>] __blkdev_get+0x140/0x310
      Mar 3 19:52:06 mala kernel: [687600.908194] [<c020f0ac>] ? bdget+0xec/0x100
      Mar 3 19:52:06 mala kernel: [687600.908198] [<c020ff7a>] blkdev_get+0xa/0x10
      Mar 3 19:52:06 mala kernel: [687600.908202] [<c0232f30>] register_disk+0x120/0x140
      Mar 3 19:52:06 mala kernel: [687600.908207] [<c0308b4d>] ? blk_register_region+0x2d/0x40
      Mar 3 19:52:06 mala kernel: [687600.908211] [<c03084f0>] ? exact_match+0x0/0x10
      Mar 3 19:52:06 mala kernel: [687600.908216] [<c0308cf0>] add_disk+0x80/0x140
      Mar 3 19:52:06 mala kernel: [687600.908221] [<c03084f0>] ? exact_match+0x0/0x10
      Mar 3 19:52:06 mala kernel: [687600.908225] [<c0308860>] ? exact_lock+0x0/0x20
      Mar 3 19:52:06 mala kernel: [687600.908230] [<c03c53df>] sd_probe_async+0xff/0x1c0

    Read the article

  • Long-running transactions structured approach

    - by disown
    I'm looking for a structured approach to long-running (hours or more) transactions. As mentioned here, these types of interactions are usually handled by optimistic locking and manual merge strategies. It would be very handy to have a more structured approach to this type of problem using standard transactions. Various long-running interactions, such as user registration, order confirmation, etc., all have transaction-like semantics, and it is both error-prone and tedious to invent your own fragile manual roll-back and/or time-out/clean-up strategies. Taking an RDBMS as an example, I realize that keeping all the transactions open would carry a major performance cost. As an alternative, I could imagine a database supporting two isolation levels/strategies simultaneously, one for short-running and one for long-running conversations. Long-running conversations could then, for instance, have stricter limitations on data access to accommodate their taking more time (read-only semantics on some data, optimistic locking semantics, etc.). Are there any solutions that do something similar?

    Read the article

  • grammar parser lexer antlr literal

    - by BB
    What's the difference between this grammar:

      ...
      if_statement : 'if' condition 'then' statement 'else' statement 'end_if';
      ...

    and this:

      ...
      if_statement : IF condition THEN statement ELSE statement END_IF;
      ...
      IF     : 'if';
      THEN   : 'then';
      ELSE   : 'else';
      END_IF : 'end_if';
      ...

    If there is any difference, does it impact performance? Thanks

    Read the article

  • OpenGL basics: calling glDrawElements once per object

    - by Bethor
    Hi all, continuing on from my explorations of the basics of OpenGL (see this question), I'm trying to figure out the basic principles of drawing a scene with OpenGL. I am trying to render a simple cube, repeated n times in every direction. My method appears to yield terrible performance: 1000 cubes brings performance below 50 fps (on a Quadro FX 1800, roughly a GeForce 9600GT). My method for drawing these cubes is as follows:

    Done once:
    - set up a vertex buffer and array buffer containing my cube vertices in model space
    - set up an array buffer indexing the cube for drawing as 12 triangles

    Done for each frame:
    - update uniform values used by the vertex shader to move all cubes at once

    Done for each cube, for each frame:
    - update uniform values used by the vertex shader to move each cube to its position
    - call glDrawElements to draw the positioned cube

    Is this a sane method? If not, how does one go about something like this? I'm guessing I need to minimize calls to glUniform, glDrawElements, or both, but I'm not sure how to do that. Full code for my little test (it depends on gletools and pyglet) is below. I'm aware that my init code (at least) is really ugly; I'm concerned with the rendering code for each frame right now, and I'll move to something a little less insane for the creation of the vertex buffers and such later on.

      import pyglet
      from pyglet.gl import *
      from pyglet.window import key
      from numpy import deg2rad, tan
      from gletools import ShaderProgram, FragmentShader, VertexShader, GeometryShader

      vertexData = [-0.5, -0.5, -0.5, 1.0,
                    -0.5,  0.5, -0.5, 1.0,
                     0.5, -0.5, -0.5, 1.0,
                     0.5,  0.5, -0.5, 1.0,
                    -0.5, -0.5,  0.5, 1.0,
                    -0.5,  0.5,  0.5, 1.0,
                     0.5, -0.5,  0.5, 1.0,
                     0.5,  0.5,  0.5, 1.0]

      elementArray = [2, 1, 0, 1, 2, 3,  ## back face
                      4, 7, 6, 4, 5, 7,  ## front face
                      1, 3, 5, 3, 7, 5,  ## top face
                      2, 0, 4, 2, 4, 6,  ## bottom face
                      1, 5, 4, 0, 1, 4,  ## left face
                      6, 7, 3, 6, 3, 2]  ## right face

      def toGLArray(input):
          return (GLfloat*len(input))(*input)

      def toGLushortArray(input):
          return (GLushort*len(input))(*input)

      def initPerspectiveMatrix(aspectRatio=1.0, fov=45):
          frustumScale = 1.0 / tan(deg2rad(fov) / 2.0)
          fzNear = 0.5
          fzFar = 300.0
          perspectiveMatrix = [frustumScale*aspectRatio, 0.0, 0.0, 0.0,
                               0.0, frustumScale, 0.0, 0.0,
                               0.0, 0.0, (fzFar+fzNear)/(fzNear-fzFar), -1.0,
                               0.0, 0.0, (2*fzFar*fzNear)/(fzNear-fzFar), 0.0]
          return perspectiveMatrix

      class ModelObject(object):
          vbo = GLuint()
          vao = GLuint()
          eao = GLuint()
          initDone = False
          verticesPool = []
          indexPool = []

          def __init__(self, vertices, indexing):
              super(ModelObject, self).__init__()
              if not ModelObject.initDone:
                  glGenVertexArrays(1, ModelObject.vao)
                  glGenBuffers(1, ModelObject.vbo)
                  glGenBuffers(1, ModelObject.eao)
                  glBindVertexArray(ModelObject.vao)
                  initDone = True
              self.numIndices = len(indexing)
              self.offsetIntoVerticesPool = len(ModelObject.verticesPool)
              ModelObject.verticesPool.extend(vertices)
              self.offsetIntoElementArray = len(ModelObject.indexPool)
              ModelObject.indexPool.extend(indexing)
              glBindBuffer(GL_ARRAY_BUFFER, ModelObject.vbo)
              glEnableVertexAttribArray(0)  # position
              glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0)
              glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ModelObject.eao)
              glBufferData(GL_ARRAY_BUFFER, len(ModelObject.verticesPool)*4,
                           toGLArray(ModelObject.verticesPool), GL_STREAM_DRAW)
              glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(ModelObject.indexPool)*2,
                           toGLushortArray(ModelObject.indexPool), GL_STREAM_DRAW)

          def draw(self):
              glDrawElements(GL_TRIANGLES, self.numIndices, GL_UNSIGNED_SHORT,
                             self.offsetIntoElementArray)

      class PositionedObject(object):
          def __init__(self, mesh, pos, objOffsetUf):
              super(PositionedObject, self).__init__()
              self.mesh = mesh
              self.pos = pos
              self.objOffsetUf = objOffsetUf

          def draw(self):
              glUniform3f(self.objOffsetUf, self.pos[0], self.pos[1], self.pos[2])
              self.mesh.draw()

      w = 800
      h = 600
      AR = float(h)/float(w)
      window = pyglet.window.Window(width=w, height=h, vsync=False)
      window.set_exclusive_mouse(True)
      pyglet.clock.set_fps_limit(None)

      ## input
      forward = [False]
      left = [False]
      back = [False]
      right = [False]
      up = [False]
      down = [False]
      inputs = {key.Z: forward, key.Q: left, key.S: back, key.D: right,
                key.UP: forward, key.LEFT: left, key.DOWN: back, key.RIGHT: right,
                key.PAGEUP: up, key.PAGEDOWN: down}

      ## camera
      camX = 0.0
      camY = 0.0
      camZ = -1.0

      def simulate(delta):
          global camZ, camX, camY
          scale = 10.0
          move = scale*delta
          if forward[0]: camZ += move
          if back[0]: camZ += -move
          if left[0]: camX += move
          if right[0]: camX += -move
          if up[0]: camY += move
          if down[0]: camY += -move

      pyglet.clock.schedule(simulate)

      @window.event
      def on_key_press(symbol, modifiers):
          global forward, back, left, right, up, down
          if symbol in inputs.keys():
              inputs[symbol][0] = True

      @window.event
      def on_key_release(symbol, modifiers):
          global forward, back, left, right, up, down
          if symbol in inputs.keys():
              inputs[symbol][0] = False

      ## uniforms for shaders
      camOffsetUf = GLuint()
      objOffsetUf = GLuint()
      perspectiveMatrixUf = GLuint()
      camRotationUf = GLuint()

      program = ShaderProgram(
          VertexShader('''
      #version 330
      layout(location = 0) in vec4 objCoord;
      uniform vec3 objOffset;
      uniform vec3 cameraOffset;
      uniform mat4 perspMx;
      void main()
      {
          mat4 translateCamera = mat4(1.0f, 0.0f, 0.0f, 0.0f,
                                      0.0f, 1.0f, 0.0f, 0.0f,
                                      0.0f, 0.0f, 1.0f, 0.0f,
                                      cameraOffset.x, cameraOffset.y, cameraOffset.z, 1.0f);
          mat4 translateObject = mat4(1.0f, 0.0f, 0.0f, 0.0f,
                                      0.0f, 1.0f, 0.0f, 0.0f,
                                      0.0f, 0.0f, 1.0f, 0.0f,
                                      objOffset.x, objOffset.y, objOffset.z, 1.0f);
          vec4 modelCoord = objCoord;
          vec4 positionedModel = translateObject*modelCoord;
          vec4 cameraPos = translateCamera*positionedModel;
          gl_Position = perspMx * cameraPos;
      }'''),
          FragmentShader('''
      #version 330
      out vec4 outputColor;
      const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
      void main()
      {
          outputColor = fillColor;
      }''')
      )

      shapes = []

      def init():
          global camOffsetUf, objOffsetUf
          with program:
              camOffsetUf = glGetUniformLocation(program.id, "cameraOffset")
              objOffsetUf = glGetUniformLocation(program.id, "objOffset")
              perspectiveMatrixUf = glGetUniformLocation(program.id, "perspMx")
              glUniformMatrix4fv(perspectiveMatrixUf, 1, GL_FALSE,
                                 toGLArray(initPerspectiveMatrix(AR)))
              obj = ModelObject(vertexData, elementArray)
              nb = 20
              for i in range(nb):
                  for j in range(nb):
                      for k in range(nb):
                          shapes.append(PositionedObject(obj, (float(i*2), float(j*2), float(k*2)), objOffsetUf))
              glEnable(GL_CULL_FACE)
              glCullFace(GL_BACK)
              glFrontFace(GL_CW)
              glEnable(GL_DEPTH_TEST)
              glDepthMask(GL_TRUE)
              glDepthFunc(GL_LEQUAL)
              glDepthRange(0.0, 1.0)
              glClearDepth(1.0)

      def update(dt):
          print pyglet.clock.get_fps()

      pyglet.clock.schedule_interval(update, 1.0)

      @window.event
      def on_draw():
          with program:
              pyglet.clock.tick()
              glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
              glUniform3f(camOffsetUf, camX, camY, camZ)
              for shape in shapes:
                  shape.draw()

      init()
      pyglet.app.run()

    Read the article
