Search Results

Search found 33454 results on 1339 pages for 'access token'.


  • GoogleAuthUtil: Daily Limit for Unauthenticated Use Exceeded

    - by Copa
    I am using the Google Client API and the GoogleAuthUtil class to get access to the user's Google Drive account:

        String scope = "oauth2:" + DriveScopes.DRIVE;
        String token = GoogleAuthUtil.getToken(getContext(), account.name, scope);

    That is the whole magic. It worked all day, but for the last couple of hours I have been receiving the following message when sending API calls:

        com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
        {
          "code": 403,
          "errors": [
            {
              "domain": "usageLimits",
              "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup.",
              "reason": "dailyLimitExceededUnreg",
              "extendedHelp": "https://code.google.com/apis/console"
            }
          ],
          "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup."
        }

    I don't know how to use an API key from the console instead of OAuth2 authentication. There are also two different getToken() overloads. One has four parameters, and the description of the last one says: "extras: Bundle containing additional information that may be relevant to the authentication scope." But what should that information look like? What do I have to put in the Bundle?
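
    A minimal sketch of the four-parameter overload, assuming the standard Play Services signature; note that this particular 403 is usually caused by the app's package name and signing certificate not being registered in the API console, which no Bundle contents can fix:

        // Hedged sketch: same call via the four-argument overload.
        // The extras Bundle carries optional metadata; an empty Bundle is valid.
        // The "dailyLimitExceededUnreg" error itself is normally resolved by
        // registering the app (package name + SHA-1 fingerprint) in the API
        // console, not by anything passed here.
        Bundle extras = new Bundle();
        String token = GoogleAuthUtil.getToken(getContext(), account.name, scope, extras);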

    Read the article

  • Accessing web.config from a SharePoint web part

    - by philj
    I have a VS 2008 web parts project. In this project is a web.config file, something like this: ……. In my web part I am trying to access values in the appSettings section. I've tried all of the code below and each returns null:

        string Owner  = ConfigurationManager.AppSettings.Get("MFOwner");
        string stuff1 = ConfigurationManager.AppSettings["MFOwner"];
        string stuff3 = WebConfigurationManager.AppSettings["MFOwner"];
        string stuff4 = WebConfigurationManager.AppSettings.Get("MFOwner");
        string stuff2 = ConfigurationManager.AppSettings["MFowner".ToString()];

    I've also tried this code I found:

        NameValueCollection sAll = ConfigurationManager.AppSettings;
        string a;
        string b;
        foreach (string s in sAll.AllKeys)
        {
            a = s;
            b = sAll.Get(s);
        }

    Stepping through it in debug mode, it returns keys like FeedCacheTime, FeedPageURL, FeedXsl1 and ReportViewerMessages, which are NOT coming from anything in my web.config file; maybe from a config file in SharePoint itself? How do I access a web.config (or any other kind of config file!) local to my web part? Thanks, Phil J
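
    Those keys (FeedCacheTime, FeedPageURL and so on) come from the web.config of the SharePoint web application hosting the part: a web part runs inside IIS under that application's configuration, not under the project's own web.config. A minimal sketch of the usual fix, assuming the settings are added to the web application's file (typically under C:\inetpub\wwwroot\wss\VirtualDirectories\<port>):

        <!-- Hedged sketch: declare the key in the SharePoint web
             application's web.config, not the web part project's file.
             "MFOwner" and its value are placeholders. -->
        <appSettings>
          <add key="MFOwner" value="SomeOwner" />
        </appSettings>

    With the key declared there, the original WebConfigurationManager.AppSettings["MFOwner"] call in the web part should return the value unchanged.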

    Read the article

  • Help with accessing a pre-existing window AFTER opener is refreshed!

    - by Wilhelm Murdoch
    Alright, I'm at my wit's end on this issue. First, the backstory: I'm working on a video management system where we allow users, when adding new content, to upload and, optionally, transcode a media file. We're using a Java applet for the browser-based FTP client.

    What I want to do is allow a user to initiate an upload and then hand the FTP connection instance to a popup window. This window acts as a job queue for the FTP transfer process, which lets users move about the main interface without having to stay on the original page until an individual file transfer is complete. For the most part I have all of this working, but here's the problem: if the popup window is closed, all connections are dropped and the upload process for all queued files is cancelled.

    So, if window one opens the popup window, adds jobs to the queue, then refreshes the screen or moves to a different page, how do I get a reference to the popup window again? The popup and its contents must remain persistent while the user navigates through the original window, and the original window must be able to access the popup to add new jobs to the queue. The popup itself is independent of the opening window, so communication only happens in one direction: parent to popup, never popup to parent.

    window.open(null, 'WINDOW_NAME') will not work in this case. I need to check whether the window exists BEFORE calling window.open. Help!?!?
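
    A sketch of one common workaround, assuming same-origin pages: window.open with a target name reuses an existing window of that name, so you can open an empty reference first and only load the queue page when the window turns out to be new:

        // Hedged sketch: re-acquire a named popup after the opener reloads.
        // Opening '' with a target name attaches to an existing window of
        // that name instead of creating a new one (same-origin assumption).
        function getQueueWindow() {
            var popup = window.open('', 'FTP_QUEUE');
            if (popup && popup.location.href === 'about:blank') {
                // The window did not exist yet: load the queue page once.
                popup.location.href = '/ftp-queue.html'; // hypothetical URL
            }
            return popup; // may be null if a popup blocker intervened
        }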

    Read the article

  • activemerchant PayPalExpress transaction is invalid

    - by Ameya Savale
    I am trying to integrate activemerchant into my Ruby on Rails application. This is my controller action, where I get the purchase attributes and create a PaypalExpressResponse object:

        def checkout
          total_as_cents, purchase_params = get_setup_params(Schedule.find(params[:schedule]), request)
          setup_response = @gateway.setup_purchase(total_as_cents, purchase_params)
          redirect_to @gateway.redirect_url_for(setup_response.token)
        end

    @gateway is my PaypalExpressGateway object, which I create using this method in my controller:

        def assign_gateway
          @gateway = PaypalExpressGateway.new(
            :login     => api_user,
            :password  => api_pass,
            :signature => api_signature
          )
        end

    I got the api_user, api_pass and api_signature values from my developer.paypal.com account; when I logged in for the first time there was already a sandbox merchant user created, which is where I got the API credentials from. And finally, here is my get_setup_params method:

        def get_setup_params(schedule, request)
          purchase_params = {
            :ip                => request.remote_ip,
            :return_url        => url_for(:action => 'review', :only_path => false, :sched => schedule.id),
            :cancel_return_url => register_path,
            :allow_note        => true,
            :item              => schedule.id
          }
          return to_cents(schedule.fee), purchase_params
        end

    However, when I click the checkout button I get redirected to a sandbox PayPal page saying "This transaction is invalid. Please return to the recipient's website to complete your transaction using their regular checkout flow." I'm not sure exactly what's wrong; I think the problem lies in the credentials, but I don't know why. Any help will be appreciated. One other point: I'm running this in my development environment, so I have put this in my config file:

        config.after_initialize do
          ActiveMerchant::Billing::Base.mode = :test
        end

    UPDATE: Found out what the problem was. My cancel return URL was invalid; instead of using register_path, I used url_for(action: "action-name", :only_path => false). This answer helped me: Rails ActiveMerchant - Paypal Express Checkout Error, even though I wasn't able to see the output of the response like that person managed to do.
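
    For reference, a sketch of the corrected parameter hash implied by the update, assuming the same helpers: PayPal rejects the setup when cancel_return_url is a relative path, so both URLs must be absolute:

        # Hedged sketch: both return URLs must be absolute for PayPal Express.
        # register_path yields a relative path; url_for with :only_path => false
        # yields a full URL.
        def get_setup_params(schedule, request)
          purchase_params = {
            :ip                => request.remote_ip,
            :return_url        => url_for(:action => 'review', :only_path => false, :sched => schedule.id),
            :cancel_return_url => url_for(:action => 'register', :only_path => false), # was register_path
            :allow_note        => true,
            :item              => schedule.id
          }
          return to_cents(schedule.fee), purchase_params
        end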

    Read the article

  • How to upload a video to a favorites playlist using GData in Objective-C

    - by Swati
    Hi, I am trying to add a video to the favorites in my account, but it fails with "Invalid request Uri" and status code 400. I don't understand how I should format my request. My code:

        NSURL *url = [NSURL URLWithString:@"http://gdata.youtube.com/feeds/api/users/username/favorite"];
        ASIFormDataRequest *request = [ASIFormDataRequest requestWithURL:url];
        [request setPostValue:@"gdata.youtube.com" forKey:@"Host"];
        [request setPostValue:@"application/atom+xml" forKey:@"Content-Type"];
        [request setPostValue:@"CONTENT_LENGTH" forKey:@"Content-Length"];
        [request setPostValue:@"" forKey:@"AuthSubToken"];
        [request setPostValue:@"2" forKey:@"GData-Version"];
        [request setPostValue:developer_key forKey:@"X-GData-Key"];
        [request setPostValue:xml_data forKey:@"API_XML_Request"];
        [request setDelegate:self];
        [request setDidFailSelector:@selector(requestFailed:)];
        [request setDidFinishSelector:@selector(gotTheResponse:)];
        [networkQueue go];

    I have an auth token, a developer key and a VIDEO_ID, but I'm not sure how to pass the XML data in the POST request:

        <?xml version="1.0" encoding="UTF-8"?>
        <entry xmlns="http://www.w3.org/2005/Atom">
          <id>VIDEO_ID</id>
        </entry>

    NSString *xml_data contains this XML in string form.
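
    One likely problem, sketched below: setPostValue:forKey: builds form fields, not HTTP headers or a raw body, so the Atom entry never arrives as the request body. A hedged sketch assuming the stock ASIHTTPRequest API, with header values set as headers and the entry appended as raw POST data:

        // Hedged sketch: headers go through addRequestHeader:value:, and the
        // Atom entry is appended as the raw request body, not as a form field.
        ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
        [request addRequestHeader:@"Content-Type" value:@"application/atom+xml"];
        [request addRequestHeader:@"GData-Version" value:@"2"];
        [request addRequestHeader:@"X-GData-Key"
                             value:[NSString stringWithFormat:@"key=%@", developer_key]];
        [request addRequestHeader:@"Authorization"
                             value:[NSString stringWithFormat:@"AuthSub token=%@", auth_token]];
        [request appendPostData:[xml_data dataUsingEncoding:NSUTF8StringEncoding]];
        [request setRequestMethod:@"POST"];

    Here auth_token is assumed to hold the AuthSub token; Host and Content-Length are filled in by the library and should not be set by hand.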

    Read the article

  • NUnit integration programmatically with Spring

    - by harkon
    Hi! I have designed a component-based architecture framework, and I use NUnit for isolated testing, which works fine so far. Now I want to enable integration tests, where the tests use the real implementations of the existing components. Each component has a life cycle (init, start and stop), and I created an NUnit component whose start section executes the NUnit console runner. Okay: if a test fixture class is present in my DLLs in the execution path, the runner executes it. Fine!

    But, and this is crucial: each implementation to be tested already exists in the process, and I want to use those instances for testing. If I use the NUnit runner the current way, each instance is created twice. Above all, I have a Spring container and an implementation registry; via this registry I can get access to all instances in the process. But how do I give the test fixtures access to the existing registry? Sure, I could start the component architecture framework in the startup of the NUnit runner, but that is not what I want. My guide is the Apache Cactus framework (JUnit with Tomcat, JBoss, etc.). Can someone help? Thanks a lot! Check: http://cone.codeplex.com

    Read the article

  • Create an instance of an exported C++ class from Delphi

    - by Alan G.
    I followed an excellent article by Rudy Velthuis about using C++ classes in DLLs. Everything was golden, except that I need access to some classes that do not have corresponding factories in the C++ DLL. How can I construct an instance of such a class? The classes in question are defined as:

        class __declspec(dllexport) exampleClass {
        public:
            void foo();
        };

    Without a factory I have no clear way of instantiating the class, but I know it can be done, as I have seen SWIG scripts (.i files) that make these classes available to Python. If Python and SWIG can do it, then I presume (and hope) there is some way to make it happen in Delphi too. Now, I don't know much about SWIG, but it seems to generate some sort of map for C++ mangled names; is that anywhere near right? Looking at the exports from the DLL, I suppose I could access the functions and constructor/destructor by index or by mangled name directly, but that would be nasty; and would it even work? Even if I can call the constructor, how can I do the equivalent of "new CClass();" in Delphi?
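
    If the DLL source can be touched at all, the usual route is a flat C factory that Delphi can import without dealing with name mangling. A minimal sketch, assuming one file can be added to the C++ project:

        // Hedged sketch: flat, unmangled factory wrappers around the class.
        // extern "C" suppresses C++ name mangling, so Delphi can import
        // these by name as plain functions returning an opaque pointer.
        extern "C" __declspec(dllexport) exampleClass* CreateExampleClass()
        {
            return new exampleClass();
        }

        extern "C" __declspec(dllexport) void DestroyExampleClass(exampleClass* p)
        {
            delete p;
        }

    Without such wrappers you are indeed left binding to the mangled exports directly, which is fragile because the mangling scheme is compiler- and version-specific.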

    Read the article

  • [JavaScript] Linux Ajax (MooTools Request.JSON) header error

    - by VDVLeon
    Hi all, I use the following code to get some JSON data:

        var request = new Request.JSON(
          {
            'url': sourceURI,
            'onSuccess': onPageData
          }
        );
        request.get();

    Request.JSON is a class from MooTools (a JavaScript library). But on Linux (Ubuntu, with Firefox 3.5 and Chrome) the request always fails, so I used netcat to display the HTTP request Ajax is sending:

        OPTIONS /the+url HTTP/1.1
        Host: example.com
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/532.3 (KHTML, like Gecko) Chrome/4.0.226.0 Safari/532.3
        Referer: http://example.com/ref...
        Access-Control-Request-Method: GET
        Origin: http://example.com
        Access-Control-Request-Headers: X-Request, X-Requested-With, Accept
        Accept: */*
        Accept-Encoding: gzip,deflate
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The first line of the HTTP request is not what it should be. It reads OPTIONS /the+url HTTP/1.1, but it should be GET /the+url HTTP/1.1. Does anybody know what causes this and how to fix it?
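
    That OPTIONS request is a CORS preflight: the browser sends it automatically when a cross-origin request carries non-simple headers, and Request.JSON adds X-Request and X-Requested-With by default. A hedged sketch of one workaround, assuming the headers object is exposed on the instance as in MooTools 1.2; dropping the custom headers lets the browser issue a plain GET:

        // Hedged sketch: strip the custom headers that trigger the preflight.
        // The browser only sends OPTIONS first when non-simple headers are set.
        var request = new Request.JSON({
            url: sourceURI,
            onSuccess: onPageData
        });
        delete request.headers['X-Request'];
        delete request.headers['X-Requested-With'];
        request.get();

    The other fix is server-side: answer the OPTIONS request with the appropriate Access-Control-Allow-* headers.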

    Read the article

  • Ember nested route: loading more models

    - by user3568719
    JsBin: http://jsbin.com/EveQOke/153/ I know how to load more than one model into a route using Ember.RSVP.hash (see the JsBin "Children" menu), and I use a dynamic segment to access one element of a collection, children/1. But I can't load additional models into a nested resource. In my example I want to populate a select with all the toys, not just list the toys of the current child. I have tried to access the model of the children route:

        App.ChildRoute = Ember.Route.extend({
          model: function(param){
            return Ember.RSVP.hash({
              allToys: this.modelFor("children"),
              child: this.store.find('child', param.child_id)
            });
          }
        });

    and to use its toy property in the template (since all of the toys should already be loaded there). child.hbs:

        <h4>All available toys</h4>
        <table>
        {{#each toy in model.allToys.toys}}
          <tr>
            <td>{{toy.name}}</td>
          </tr>
        {{/each}}
        </table>
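
    A sketch of an alternative, assuming Ember Data's usual store API: ask the store for the toy records directly instead of reaching through the parent route's model, whose collection holds children rather than toys:

        // Hedged sketch: load every toy straight from the store, so the
        // select is populated independently of the child being displayed.
        App.ChildRoute = Ember.Route.extend({
          model: function(params) {
            return Ember.RSVP.hash({
              allToys: this.store.find('toy'),                   // all toy records
              child:   this.store.find('child', params.child_id) // the one child
            });
          }
        });

    The template would then iterate {{#each toy in model.allToys}} rather than model.allToys.toys.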

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database from on-premise SQL Server 2012 to SQL Azure. Easy, right?

    Methods. After a fair bit of research I've come across the following:

    1. Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.

    2. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from CodePlex only include the hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes. Using either method, the process could be:

    1. Export from on-premise SQL Server 2012 to a local BACPAC.
    2. Upload the BACPAC to blob storage.
    3. Import the BACPAC into SQL Azure via the hosted DAC.

    Or:

    1. Export from on-premise SQL Server 2012 to a local BACPAC.
    2. Import the BACPAC into SQL Azure via the client DAC.

    Question: all of the above seems to be quite a lot of effort for something that looks like a standard feature... so before I start reinventing the wheel and documenting the results for all to see: is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on a local database, click "Tasks", then "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere?
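
    For reference, a sketch of what the second process looks like with SqlPackage.exe, the DacFx command-line tool, assuming it was installed with SQL Server 2012's Data-Tier Application Framework; server names, database names and credentials are placeholders:

        REM Hedged sketch: export the on-premise database to a BACPAC ...
        "%ProgramFiles(x86)%\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" ^
          /Action:Export /SourceServerName:localhost /SourceDatabaseName:MyDb ^
          /TargetFile:MyDb.bacpac

        REM ... then import it into SQL Azure via the client-side DAC.
        "%ProgramFiles(x86)%\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" ^
          /Action:Import /SourceFile:MyDb.bacpac ^
          /TargetServerName:myserver.database.windows.net ^
          /TargetDatabaseName:MyDb /TargetUser:myuser /TargetPassword:mypassword

    Both commands slot naturally into an <Exec Command="" /> task in the MSBuild script.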

    Read the article

  • Is the following grammar LL(1), SLR(1), LR(1), LALR?

    - by Mike
    Given the grammar:

        P -> { D ; C }
        D -> d ; D | d
        C -> c ; C | c

    a) Is the grammar LL(1)? Explain your answer.
    b) Is the grammar SLR(1)? Explain your answer.
    c) Is the grammar LR(1)? Explain your answer.
    d) Is the grammar LALR? Explain your answer.

    As for my answers, I actually got "no" for all of them, so I'm thinking I did something wrong. Here is my reasoning:

    a) It is not LL(1) because it is not left-factored.

    b) It is not SLR(1) because of the transition-diagram item set 2, which is:

        D -> d . ; D
        D -> d .

    We need to consult the follow set, and FOLLOW(D) = { ; }, therefore this is not SLR.

    c) It is not LR(1) because of:

        Item set 1:
        P -> { D . ; C }, $
        D -> . d ; D, ;
        D -> . d, ;

        Item set 2:
        D -> d . ; D, ;
        D -> d ., ;

        Item set 3:
        D -> d ; . D, ;
        D -> . d ; D, ;
        D -> . d, ;

    Since item set 2 goes to item set 3 on ;, AND the lookahead token of "D -> d ." in item set 2 is also ;, this causes a shift/reduce conflict, therefore this grammar is not LR(1).

    d) This grammar is not LALR because it is not LR(1).

    Thanks for your help!

    Read the article

  • How to store timers keyed by OrderId?

    - by Adrian Serafin
    Hi! I have a system where clients can make orders. After making an order, a client has 60 minutes to pay for it before it is deleted. On the server side, when an order is made I create a timer with a 60-minute interval:

        System.Timers.Timer timer = new System.Timers.Timer(1000 * 60 * 60);
        timer.AutoReset = false;
        timer.Elapsed += HandleElapsed;
        timer.Start();

    Because I want to be able to dispose of the timer if the client decides to pay, and to cancel the order if he doesn't, I keep two dictionaries:

        Dictionary<int, Timer> _orderTimer;
        Dictionary<Timer, int> _timerOrder;

    When the client pays, I can reach the timer by order id in O(1) thanks to _orderTimer, and when the time elapses I can reach the order in O(1) thanks to _timerOrder.

    My question is: is this a good approach, assuming the maximum number of entries I have to keep in the dictionaries at any one moment is 50,000? Maybe it would be better to derive from the Timer class, add a property called OrderId, keep everything in a List and search for the order/timer using LINQ? Or should I do this in a different way entirely?
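
    A sketch of a third option, keeping only the forward dictionary: capturing the order id in the Elapsed handler's closure removes the need for the reverse Timer-to-order lookup entirely. A minimal sketch (CancelOrder is a hypothetical helper; locking is shown because Elapsed fires on a thread-pool thread):

        // Hedged sketch: one dictionary plus a closure instead of two dictionaries.
        private readonly Dictionary<int, System.Timers.Timer> _orderTimer =
            new Dictionary<int, System.Timers.Timer>();
        private readonly object _sync = new object();

        public void StartPaymentTimer(int orderId)
        {
            var timer = new System.Timers.Timer(1000 * 60 * 60) { AutoReset = false };
            timer.Elapsed += delegate
            {
                lock (_sync) { _orderTimer.Remove(orderId); }
                CancelOrder(orderId); // hypothetical: deletes the unpaid order
                timer.Dispose();
            };
            lock (_sync) { _orderTimer[orderId] = timer; }
            timer.Start();
        }

        public void OnOrderPaid(int orderId)
        {
            lock (_sync)
            {
                System.Timers.Timer timer;
                if (_orderTimer.TryGetValue(orderId, out timer))
                {
                    timer.Stop();
                    timer.Dispose();
                    _orderTimer.Remove(orderId);
                }
            }
        }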

    Read the article

  • Exposed onsite vs IFD deployments for MS Dynamics CRM

    - by Greg McGuffey
    I'm working for the first time on a MS Dynamics CRM 4.0 project. Our company has a high number of remote employees and even more remote consultants, so it will be necessary to make the CRM solution available over the internet. As near as I can tell, I have three options:

    1. Have everyone use a VPN to access an intranet site (typical onsite deployment). However, we have found that VPNs are far from trouble-free and cause many support issues. We avoid them like the plague.

    2. Use IFD to expose the CRM on the internet. I don't know much about this except that the URL will be different from the onsite URL, which could cause some headaches (see below).

    3. Expose the CRM site directly to the internet, using SSL to encrypt traffic. We currently do this with our MS SharePoint sites. I'm not sure how secure this would be (one of the reasons for this question).

    I'd like to avoid using both the onsite intranet deployment and IFD together, for two reasons. First, one of the requirements is to notify users by email when they've been assigned a task, including the URL of the task in the email; with both deployments in use I'd need to include two URLs and the user would need to know which to use. Second, the main users of the solution split their time between being in the office and being remote, so they would need to access the solution two different ways and know when to use which. Bad.

    So, what are the advantages and disadvantages of each of these methods? Any other options? Is there any issue using IFD from within the intranet? Security issues? Thanks!

    Read the article

  • Python and the self parameter

    - by Svend
    I'm having some issues with the self parameter, and some seemingly inconsistent behavior in Python is annoying me, so I figure I'd better ask some people in the know. I have a class, Foo. This class will have a bunch of methods, m1 through mN. For some of these I will use a standard definition, as with m1 below. But for others it's more convenient to just assign the function directly, as I've done with m2 and m3:

        import os

        def myfun(x, y):
            return x + y

        class Foo():
            def m1(self, y, z):
                return y + z + 42
            m2 = os.access
            m3 = myfun

        f = Foo()
        print f.m1(1, 2)
        print f.m2("/", os.R_OK)
        print f.m3(3, 4)

    Now, I know that os.access does not take a self parameter (seemingly), and it still has no issues with this type of assignment. However, I cannot do the same for my own functions (imagine myfun defined off in mymodule.myfun). Running the above code yields the following output:

        3
        True
        Traceback (most recent call last):
          File "foo.py", line 16, in <module>
            print f.m3(3, 4)
        TypeError: myfun() takes exactly 2 arguments (3 given)

    The problem is that, due to the framework I work in, I cannot avoid having a class Foo. But I'd like to avoid wrapping my mymodule stuff in a dummy class. To make it work I'd need to write something like:

        def m3(self, a1, a2):
            return mymodule.myfun(a1, a2)

    which is hugely redundant when you have about 20 of them. So the question is: either how do I do this in a totally different and obviously much smarter way, or how can I make my own modules behave like the built-in ones, so they don't complain about receiving one argument too many?
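
    For context, a sketch of the usual explanation and fix: built-ins like os.access are implemented in C and do not become bound methods when looked up on an instance, while a Python function like myfun does, so the instance is silently passed as its first argument. Wrapping the assignment in staticmethod suppresses that binding (a minimal sketch):

        import os

        def myfun(x, y):
            return x + y

        class Foo(object):
            # staticmethod stops Python from binding the instance as a first
            # argument, matching the behavior that os.access shows by default.
            m2 = os.access
            m3 = staticmethod(myfun)

        f = Foo()
        print f.m3(3, 4)  # prints 7, no TypeError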

    Read the article

  • How to "reduce" a hash?

    - by Julien Lebosquain
    Suppose I have any "long" hash, like a 16-byte MD5 or a 20-byte SHA-1. I want to reduce this hash to fit in 4 bytes, for GetHashCode() purposes. First, I'm perfectly aware that I'll get more collisions. That's totally fine in my case, but I'd still prefer as few collisions as possible. There are several solutions to my problem:

    1. I could take the first 4 bytes of the hash.
    2. I could take the last 4 bytes of the hash.
    3. I could take 4 random bytes of the hash.
    4. I could generate a hash of the hash, involving the classic prime-number multiplications.

    Are there other solutions I didn't think of? And, more importantly, which method will give me the most unique hash code? I'm currently assuming they're almost equivalent. Microsoft chose to make the public key token of an assembly the last 8 bytes of the SHA-1 hash of its public key, so I'll probably go for that solution, but I'd like to know why.
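
    One more option worth listing: XOR-fold the whole digest into 32 bits, so every input byte influences the result. A minimal C# sketch, assuming the digest length is a multiple of 4 (true for MD5's 16 and SHA-1's 20 bytes):

        // Hedged sketch: XOR-fold a cryptographic digest down to an Int32.
        // For a uniformly distributed digest, any 4 bytes are in principle
        // as good as any others; folding simply uses all of them.
        static int FoldHash(byte[] digest)
        {
            int result = 0;
            for (int i = 0; i + 4 <= digest.Length; i += 4)
                result ^= BitConverter.ToInt32(digest, i);
            return result;
        }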

    Read the article

  • What the VC++ compiler/linker does when building a C++ project with Managed Extensions

    - by ???
    The initial problem is that I rebuilt a C++ project with debug symbols and copied it to a test machine. The output of the project is an external COM server (an .exe file). When calling a COM interface function, there's an RPC failure: COMException (0x800706BE): "The remote procedure call failed." According to the COM HRESULT design, if the facility code is 7 it's actually a Win32 error, and the Win32 error code here is 0x6BE, which is the above-mentioned "remote procedure call failed". All I do is replace the COM server .exe file; the original file works well.

    When I looked into the project, I found it's a C++ project with Managed Extensions. Inspecting the DLL with Reflector shows two additional .NET assembly references, yet the project settings say nothing about them. I turned on the compiler's "show includes" option and the linker's verbose library output, and tried to analyze whether the assemblies are indirectly referenced via an .h file. I collected all the .h files and grepped them for '#using', '#import' and the assembly file name itself. There really is a '#using' in one of the .h files, but it is not relevant to the referenced assemblies.

    About the linked .lib files: only one of them is a side product of another Managed-Extensions-enabled C++ project; all the others are produced by pure, traditional C++ projects. For the managed-extensions-enabled C++ project, I checked its output assembly, and it does NOT reference the two assemblies. I even tried to capture access to the additional assembly files with Sysinternals' Filemon and Procmon, but the rebuild process does NOT access these files.

    I'm very confused about the compile and link process model of a VC++/CLI project. Where did the additional assembly references slip into the final assembly? Thanks in advance for any of your help.

    Read the article

  • WCF client binding configuration in program code

    - by smarsha
    I have the following class that configures security, encoding and token parameters, but I am having trouble specifying a MaxReceivedMessageSize the way BasicHttpBinding allows. Any insight would be appreciated.

        public class MultiAuthenticationFactorBinding
        {
            public static Binding CreateMultiFactorAuthenticationBinding()
            {
                HttpsTransportBindingElement httpTransport = new HttpsTransportBindingElement();

                CustomBinding binding = new CustomBinding();
                binding.Name = "myCustomBinding";

                TransportSecurityBindingElement messageSecurity =
                    TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
                messageSecurity.AllowInsecureTransport = true;
                messageSecurity.EnableUnsecuredResponse = true;
                messageSecurity.MessageSecurityVersion =
                    MessageSecurityVersion.WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12;
                messageSecurity.SecurityHeaderLayout = SecurityHeaderLayout.Strict;
                messageSecurity.IncludeTimestamp = true;
                messageSecurity.SetKeyDerivation(false);

                TextMessageEncodingBindingElement Quota =
                    new TextMessageEncodingBindingElement(MessageVersion.Soap11, System.Text.Encoding.UTF8);
                Quota.ReaderQuotas.MaxDepth = 32;
                Quota.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
                Quota.ReaderQuotas.MaxArrayLength = 16384;
                Quota.ReaderQuotas.MaxBytesPerRead = 4096;
                Quota.ReaderQuotas.MaxNameTableCharCount = 16384;

                X509SecurityTokenParameters clientX509SupportingTokenParameters =
                    new X509SecurityTokenParameters();
                clientX509SupportingTokenParameters.InclusionMode =
                    SecurityTokenInclusionMode.AlwaysToRecipient;
                clientX509SupportingTokenParameters.RequireDerivedKeys = false;
                messageSecurity.EndpointSupportingTokenParameters.Endorsing.Add(
                    clientX509SupportingTokenParameters);

                //binding.ReceiveTimeout = new TimeSpan(0, 0, 300);
                binding.Elements.Add(Quota);
                binding.Elements.Add(messageSecurity);
                binding.Elements.Add(httpTransport);
                return binding;
            }
        }
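
    For what it's worth, a sketch of where that setting lives in a custom binding: BasicHttpBinding's MaxReceivedMessageSize is a shortcut for the same property on the transport binding element, so on a CustomBinding it goes on the HttpsTransportBindingElement (the quota value below is an arbitrary example):

        // Hedged sketch: on a CustomBinding, the message size quota belongs
        // to the transport binding element rather than the binding itself.
        HttpsTransportBindingElement httpTransport = new HttpsTransportBindingElement();
        httpTransport.MaxReceivedMessageSize = 10 * 1024 * 1024; // e.g. 10 MB
        httpTransport.MaxBufferSize = 10 * 1024 * 1024;          // for buffered mode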

    Read the article

  • Unhandled Exception error message

    - by Joshua Green
    Does anyone know why including a line such as:

        t = PL_new_term_ref();

    would cause an unhandled exception, "0xC0000005: Access violation reading location 0x0000000c" (Visual Studio 2008)? I have a header file:

        class UserTaskProlog : public ArAction {
        public:
            UserTaskProlog( const char* name = " sth " );
            ~UserTaskProlog( );
            AREXPORT virtual ArActionDesired *fire( ArActionDesired currentDesired );
        private:
            term_t t;
        };

    and a cpp file:

        UserTaskProlog::UserTaskProlog( const char* name ) : ArAction( name, " sth " )
        {
            char** argv;
            argv[ 0 ] = "libpl.dll";
            PL_initialise( 1, argv );
            PlCall( "consult( 'myProg.pl' )" );
        }

        UserTaskProlog::~UserTaskProlog( )
        {
        }

        ArActionDesired *UserTaskProlog::fire( ArActionDesired currentDesired )
        {
            cout << " something " << endl;
            t = PL_new_term_ref( );
        }

    Without t = PL_new_term_ref() everything works fine, but as soon as I start adding my Prolog code (declarations first, such as t = PL_new_term_ref), I get this access violation error message. I'd appreciate any help. Thanks,
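
    One suspicious spot, shown as a hedged sketch: argv is used uninitialized in the constructor, so writing argv[0] through it is undefined behavior, and PL_initialise may be reading garbage and failing to set the engine up, after which the first PL_new_term_ref call faults. A stack-allocated argv is the usual pattern:

        // Hedged sketch: give PL_initialise a real argv array instead of an
        // uninitialized char** (writing argv[0] through it is undefined).
        UserTaskProlog::UserTaskProlog( const char* name ) : ArAction( name, " sth " )
        {
            char* argv[] = { (char*)"libpl.dll" };
            if ( !PL_initialise( 1, argv ) )
            {
                // initialization failed; calling term functions now would crash
            }
            PlCall( "consult( 'myProg.pl' )" );
        }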

    Read the article

  • iPhone Localization: simple project not working

    - by gonso
    Hello. I'm doing my first localized project and I've been fighting with it for several hours with no luck. I have to create an app that, based on the user's selection, shows texts and images in different languages. I've read most of Apple's documents on the matter, but I can't make a simple example work. These are my steps so far:

    1) Create a new project.

    2) Manually create an "en.lproj" directory in the project's folder.

    3) Using TextEdit, create a file called "Localizable.strings" and save it as Unicode UTF-16. The file looks like this:

        /*
          Localizable.strings
          Multilanguage02
          Created by Gonzalo Floria on 5/6/10.
          Copyright 2010 __MyCompanyName__. All rights reserved.
        */
        "Hello" = "Hi";
        "Goodbye" = "Bye";

    4) Drag this file into the Resources folder in Xcode; it appears with an "en" subdirectory underneath it (with the drop-down triangle to the left). If I open it in Xcode it looks all wrong, with lots of ? symbols, but I'm guessing that's because it's a UTF-16 file. Right?

    5) Now in viewDidLoad I can access these strings like this:

        NSString *translated;
        translated = NSLocalizedString(@"Hello", @"User greetings");
        NSLog(@"Translated text is %@", translated);

    My problem is allowing the user to switch language. I have created an es.lproj with a (Spanish) Localizable.strings file, but I CAN'T access it. I've tried this line:

        [[NSUserDefaults standardUserDefaults] setObject:[NSArray arrayWithObjects:@"es", nil]
                                                  forKey:@"AppleLanguages"];

    But that only works the NEXT time you load the application. Is there no way to let the user switch languages while the application is running? Do I have to implement my own dictionary files and forget all about the NSLocalizedString family? Thanks for ANY advice or pointers. Gonso
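
    One common workaround, sketched here under the assumption that both .lproj folders end up in the main bundle: load the target language's bundle yourself and read the strings from it, bypassing the AppleLanguages preference entirely:

        // Hedged sketch: pull strings from a specific .lproj at runtime
        // instead of relying on the AppleLanguages preference.
        NSString *path = [[NSBundle mainBundle] pathForResource:@"es" ofType:@"lproj"];
        NSBundle *languageBundle = [NSBundle bundleWithPath:path];
        NSString *translated = [languageBundle localizedStringForKey:@"Hello"
                                                               value:@""
                                                               table:nil];
        NSLog(@"Translated text is %@", translated);

    Wrapping those lines in a small helper that replaces NSLocalizedString gives an in-app language switch without restarting.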

    Read the article

  • How to compare 2 complex spreadsheets running in parallel for consistency with each other?

    - by tbone
    I am working on converting a large number of spreadsheets to use a new third-party data access library (converting from third-party library #1 to third-party library #2). FYI: a call to a UDF (user-defined function) is placed in a cell, and when that is refreshed, it pulls the data into a pivot table below the formula. Both libraries behave the same and produce the same output, except that small irregularities can arise, such as an additional field being shown in the output pivot table with library #2, which can affect formulas on the sheet if data is read from the pivot table without using GetPivotData.

    So I have ~100 of these very complicated spreadsheets (20+ worksheets per workbook) that I have to convert and run in parallel for a period of time, to see whether the output using the new data access library matches the old one. Is there some clever approach to this, so I don't have to spend a large amount of time analyzing each sheet to determine the specific elements to compare? Two rough ideas come to mind:

    1. Create a Validator workbook that has the same number of worksheets, and simply compute Workbook1!Worksheet1!A1 - Workbook2!Worksheet3!A1 for every possible cell on each sheet.

    2. Roughly the equivalent of #1, but traverse the cells of the two books in VBA and log any cells that do not match.

    I don't particularly like either idea. Can anyone think of something better, maybe some third-party utility I could buy?
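
    A minimal sketch of idea #2 in VBA, assuming both workbooks are open and have identically named sheets; the workbook names are placeholders, and mismatches are logged to the Immediate window:

        ' Hedged sketch: cell-by-cell comparison of two open workbooks,
        ' logging every difference found in the used range of each sheet.
        Sub CompareWorkbooks()
            Dim wb1 As Workbook, wb2 As Workbook
            Dim ws1 As Worksheet, ws2 As Worksheet
            Dim cell As Range

            Set wb1 = Workbooks("OldLibrary.xls")
            Set wb2 = Workbooks("NewLibrary.xls")

            For Each ws1 In wb1.Worksheets
                Set ws2 = wb2.Worksheets(ws1.Name)
                For Each cell In ws1.UsedRange
                    If CStr(cell.Value) <> CStr(ws2.Range(cell.Address).Value) Then
                        Debug.Print ws1.Name & "!" & cell.Address & ": '" & _
                            cell.Value & "' <> '" & ws2.Range(cell.Address).Value & "'"
                    End If
                Next cell
            Next ws1
        End Sub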

    Read the article

  • Using LINQ to SQL

    - by user324831
    Well, I am new to this ORM stuff. We have to create a large project. I read about LINQ to SQL: is it appropriate to use in a high-risk project? I found no problem with it personally, but the thing is that there will be no going back once we've started, so I need some feedback from the ORM gurus here. Would Entity Framework be better? (I am in doubt about LINQ to SQL because I have read and heard negative feedback here and there.) I will be using MVC2 as the framework, so please give feedback about LINQ to SQL in this regard.

    q2) Also, I am a fan of stored procedures, as they are precompiled and speed things up, and I have never worked without them. I know that LINQ to SQL supports stored procedures, but would it be feasible to give them up, given the beautiful data access layer generated with so little effort? We are also in need of rapid development.

    q3) If some changes to some fields are required in the database, how will the changes be accommodated in the LINQ to SQL data access layer?
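
    On q2, for reference: LINQ to SQL does not force a choice, since stored procedures dragged onto the designer surface become typed methods on the generated DataContext. A hedged sketch with hypothetical names:

        // Hedged sketch: calling a stored procedure through a LINQ to SQL
        // DataContext. MyDataContext and GetCustomersByRegion are hypothetical
        // names the designer would generate from a real database.
        using (var db = new MyDataContext())
        {
            var customers = db.GetCustomersByRegion("West");

            // The same context also serves ad-hoc LINQ queries:
            var active = from c in db.Customers
                         where c.IsActive
                         select c;
        }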

    Read the article

  • EJB and JPA and @OneToMany - Transaction too long?

    - by marioErr
    Hello. I'm using EJB and JPA, and when I try to access the PhoneNumber objects in the phoneNumbers attribute of a Contact, it sometimes takes several minutes for the data to actually be returned. At first it just returns no phoneNumbers, not even null; then, after some time, when I call it again, they magically appear. This is how I access the data:

        for (Contact c : contactFacade.findAll()) {
            System.out.print(c.getName() + " " + c.getSurname() + " : ");
            for (PhoneNumber pn : c.getPhoneNumbers()) {
                System.out.print(pn.getNumber() + " (" + pn.getDescription() + "); ");
            }
        }

    I'm using the facade session EJB generated by NetBeans (basic CRUD methods). It always prints the correct name and surname; the phone numbers and descriptions are only printed some time (it varies) after creating them via the facade. I'm guessing it has something to do with transactions. How do I solve this? These are my JPA entities:

    Contact:

        @Entity
        public class Contact implements Serializable {
            private static final long serialVersionUID = 1L;
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            private String name;
            private String surname;
            @OneToMany(cascade = CascadeType.REMOVE, mappedBy = "contact")
            private Collection<PhoneNumber> phoneNumbers = new ArrayList<PhoneNumber>();

    PhoneNumber:

        @Entity
        public class PhoneNumber implements Serializable {
            private static final long serialVersionUID = 1L;
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            private String number;
            private String description;
            @ManyToOne()
            @JoinColumn(name="CONTACT_ID")
            private Contact contact;
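
    A hedged sketch of one common cause and fix for this symptom: with a @OneToMany mapped by the child side, persisting a PhoneNumber whose contact field is set does not update the parent's (possibly cached) collection, so both sides of the relation should be maintained, and the query can fetch the collection explicitly:

        // Hedged sketch: keep both sides of the bidirectional relation in
        // sync when creating a phone number (em is the injected EntityManager):
        PhoneNumber pn = new PhoneNumber();
        pn.setContact(contact);
        contact.getPhoneNumbers().add(pn);
        em.persist(pn);

        // ...and/or fetch the collection eagerly in the facade query, which
        // sidesteps a stale cached copy of the lazy collection:
        List<Contact> contacts = em.createQuery(
                "SELECT DISTINCT c FROM Contact c LEFT JOIN FETCH c.phoneNumbers",
                Contact.class)
            .getResultList();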

    Read the article

  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have a list of offsets and lengths which may have several thousand entries. The user selects an entry, and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:

        FileStream.Position := Offset;
        MemoryStream.CopyFrom(FileStream, Size);

    This works fine, but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size. The file's content is only read by my program, which is the only program accessing the file at the time. The files are stored locally, so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?
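
    One knob worth trying, sketched with the Win32 API: TFileStream opens the file without any caching hint, so the Windows cache manager may read ahead as if the access were sequential; FILE_FLAG_RANDOM_ACCESS tells it not to, which can help scattered reads in a very large file (a hedged sketch, uses Windows, Classes, SysUtils):

        // Hedged sketch: open with a random-access hint and wrap the handle
        // in THandleStream, which TFileStream is itself built on.
        procedure ReadChunk(const FileName: string; Offset: Int64; Size: Integer;
          MemoryStream: TMemoryStream);
        var
          Handle: THandle;
          Stream: THandleStream;
        begin
          Handle := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ, nil,
            OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, 0);
          if Handle = INVALID_HANDLE_VALUE then
            RaiseLastOSError;
          Stream := THandleStream.Create(Handle);
          try
            Stream.Position := Offset;
            MemoryStream.CopyFrom(Stream, Size);
          finally
            Stream.Free;
            CloseHandle(Handle);
          end;
        end;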

    Read the article

  • RSKeyMgmt -r unable to remove installation ID

    - by Eves
    I have backed up databases on one SQL Server 2005 instance and restored them on a second SQL Server 2005 instance. I am currently trying to remove the new server's OLD key instance ID using the Reporting Services key manager (RSKeyMgmt -r). Prior to running the removal command, the list of current instances shows the new server's OLD instance ID as well as the NEW instance ID from the first SQL Server. Executing the RSKeyMgmt -r command results in "The command completed successfully". However, when I recheck the listing of current instance IDs, I still see both the OLD and NEW instance IDs. In addition, the application event log shows an error:

        Report Server Windows Service (MSSQLSERVER) has not been granted access to the catalog content

    Does anyone know why I would be getting the above application error? Or does anyone know what I would need to do to grant the Report Server Windows service access to the catalog? The first SQL Server, where the databases were backed up, is an Enterprise edition and the second, where they were restored, is Standard edition. Could this be the cause of the problem? Is there a way to make this backup-and-restore migration work?

    Read the article
