Search Results

Search found 22041 results on 882 pages for 'kill process'.


  • Migrating to web application with AjaxControlToolkit

    - by Chris
    Hi all,

    We are currently migrating our ASP.NET website to a web application in Visual Studio 2008. Most of the process has been fairly straightforward, but I have hit one block that is driving me a bit nuts.

    We are using the AjaxControlToolkit for some functionality, specifically an AutoCompleteExtender. When the site is run locally through VS's development server, the extender (dropdown) does not render after the service returns the result set. However, if I deploy the migrated solution to our UAT server, the extender functions correctly. I have ensured the Ajax Control Toolkit is properly installed locally on my dev machine (and the DLL is available in the bin directory), and, using debugging, have ensured the service is called correctly and runs through without error (which it does). The web application was taken from a server running IIS 7.

    Can anyone confirm whether the Visual Studio 2008 development server requires a different configuration to IIS 7 (as I believe IIS 6 requires a different configuration to IIS 7), and whether there is a resource that provides more info? My own searches have turned up very little in this area. On the other hand, if I am looking in the wrong area, any other tips would be appreciated.

    Thanks, Chris

    Read the article

  • Using the MVVM Light Toolkit to make Blendable applications

    - by Dave
    A while ago, I posted a question regarding switching between a Blend-authored GUI and a Visual Studio-authored one. I got it to work okay by adding my Blend project to my VS2008 project and then changing the startup application and recompiling. This would result in two applications that had completely different GUIs, yet used the exact same ViewModel and Model code. I was pretty happy with that.

    Now that I've learned about Laurent Bugnion's MVVM Light Toolkit, I would really like to leverage his efforts to make this process of supporting multiple GUIs for the same back-end code possible. The question is: does the toolkit facilitate this, or am I stuck doing it my previous way? I've watched his video from MIX10 and have read some of the articles about it online. However, I've yet to see anything that indicates there is a clean way to let a user dynamically switch GUIs on the fly by loading a different DLL. There are MVVM templates for VS2008 and Blend 3, but am I supposed to create both types of projects for my application and then reference specific files from my VS2008 solution?

    UPDATE: I re-read some information on Laurent's site and had forgotten that the whole point of the template is to allow the same solution to be opened in VS2008 and Blend. With this new perspective, it looks like the templates are actually intended to use a single GUI, most likely designed entirely in Blend (with the convenience of debugging through VS2008), and then to use two different ViewModels -- one for design time and one for runtime.

    So it seems the answer to my question is that I want to use a combination of my previous solution and the MVVM Light Toolkit. The former will allow me to make multiple, distinct GUIs around my core code, while the latter will make designing fancy GUIs in Blend easier through the use of a design-time ViewModel. Can anyone comment on this?

    Read the article

  • WCF Double Hop questions about Security and Binding.

    - by Ken Maglio
    Background information: a .NET website calls a service (aka the external service) facade on an app server in the DMZ. This external service then calls the internal service, which is on our internal app server. From there, the internal service calls a stored procedure (LINQ to SQL classes) and passes the serialized data back through the external service, and from there back to the website. We've done this so all communication goes through an external layer (our external app server) and allows interoperability; we access our data just like our clients consuming our services.

    We've gotten to the point in our development where we have completed the system and it all works; the double hop acts as it should. However, now we are working on securing the entire process. We are looking at using TransportWithMessageCredential. We want WS2007HttpBinding on the external side for interoperability, but netTcpBinding for the bridge through the firewall, for security and speed.

    Questions:

    1. If we choose WS2007HttpBinding as the external service's binding and netTcpBinding for the internal service, is this possible? I know WS-* supports this, as does netTcp, but do they play nice when passing credential information like user/pass?
    2. If we go to Kerberos, will this impact anything? We may want to do impersonation in the future.

    If you can, when you answer, please post reference links explaining why you're answering the way you are; that would be very helpful to us. Thanks!
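    For illustration, a minimal C# sketch (not the poster's actual configuration) of pairing the two bindings with TransportWithMessageCredential; the class name and the choice of username credentials are assumptions:

        using System.ServiceModel;

        class BindingSetup
        {
            static void Configure()
            {
                // External, interoperable endpoint: HTTPS transport plus a
                // message-level username credential.
                var external = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
                external.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

                // Internal bridge through the firewall: TCP transport for speed,
                // same credential mode.
                var internalBridge = new NetTcpBinding(SecurityMode.TransportWithMessageCredential);
                internalBridge.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
            }
        }

    Nothing in WCF itself forbids mixing the two bindings across the two hops, since each hop is an independent channel; the external facade would typically authenticate the caller and then supply its own (or forwarded) credentials on the netTcp channel.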

    Read the article

  • How to continuously scroll content within a DIV using jQuery?

    - by Camsoft
    Aim: to have a container DIV with a fixed height and width, and have the HTML content within that DIV automatically scroll vertically, continuously.

    Question: basically, I've created the code below using jQuery to scroll (move) the child DIV vertically upwards until it is outside the bounding parent box, at which point the animation completes and triggers an event handler that resets the position of the child DIV and starts the process again. This works fine: the content scrolls up, leaving a blank space, then starts from the bottom again and scrolls up. The problem is that the requirement is for the content to appear as if it were continuously repeating (the original post included a diagram illustrating this). Is there a way to do this? (I don't want to use third-party plug-ins or libraries other than jQuery.)

    What I have so far.

    The HTML:

        <div id="scrollingContainer">
          <div class="scroller">
            <h1>This is a title</h1>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse at orci mi, id gravida tellus. Integer malesuada ante sit amet enim pulvinar congue. Donec pulvinar dolor et arcu posuere feugiat id et felis.</p>
            <p>More content....</p>
          </div>
        </div>

    The CSS:

        #scrollingContainer {
            height: 300px;
            width: 300px;
            overflow: hidden;
        }
        #scrollingContainer DIV.scroller {
            position: relative;
        }

    The JavaScript:

        /**
         * Scrolls the content DIV
         */
        function scroll() {
            if ($('DIV.scroller').height() > $('#scrollingContainer').height()) {
                var t = $('DIV.scroller').position().top + $('DIV.scroller').height();
                /* Animate */
                $('DIV.scroller').animate(
                    { top: '-=' + t + 'px' },
                    4000, 'linear', animationComplete);
            }
        }

        function animationComplete() {
            $(this).css('top', $('#scrollingContainer').height());
            scroll();
        }

    Read the article

  • Regarding Toplink Fetching Policy

    - by Chandu
    Hi, I'm working on a Swing project using NetBeans with TopLink Essentials and MySQL. The problem I'm facing is that the entity object doesn't get updated after insertions take place, when calling the getter for the collection mapped on the foreign-key property.

    For example, I have two tables, Table1 and Table2. The sno/id column is the primary key in Table1 and a foreign key in Table2. Through the find method I get a particular sno object (existing in Table1), set some values, persist them to Table2, and commit the transaction. When I then select the same sno object through the find method and get its Table2 collection through the bean's getTable2Collection() (as it is already created in the bean by TopLink Essentials), I'm unable to see the newly added record, although all the other records are displayed. If I close the application and reopen it, the new record is reflected when loading the same sno through the above process.

    I understand this is a kind of lazy fetching, and that there should be some way to change the fetch policy so that the entity object gets updated with the changes. Please help me in this regard.

    Regards, Chandu

    Read the article

  • Entity Framework Update Error in ASP.NET Mvc with related entity

    - by Barry
    I have run into a problem for which I have searched and tried everything I can think of, to no avail. I am using the same repository and context throughout the process. I have a Booking entity and a UserExtension entity (the original post included a model diagram).

    I get my form collection back from my page and create a new booking:

        public ActionResult Create(FormCollection collection)
        {
            Booking toBooking = new Booking();

    I then do some validation and property assignment, and find an associated BidInstance:

        toBooking.BidInstance = bid;

    I have checked, and the bid is not null. Finally, I get the user extension from the current IPrincipal user, as below:

        UserExtension loggedInUser = m_BookingRepository.GetBookingCurrentUser(User);
        toBooking.UserExtension = loggedInUser;

    The code for GetBookingCurrentUser is:

        public UserExtension GetBookingCurrentUser(IPrincipal currentUser)
        {
            var user = (from u in Context.aspnet_Users.Include("UserExtension")
                        where u.UserName == currentUser.Identity.Name
                        select u).FirstOrDefault();
            if (user != null)
            {
                var userextension = (from u in Context.UserExtension.Include("aspnet_Users")
                                     where u.aspnet_Users.UserId == user.UserId
                                     select u).FirstOrDefault();
                return userextension;
            }
            else
            {
                return null;
            }
        }

    It returns the UserExtension fine and assigns it fine. (I originally used aspnet_Users directly, but encountered this problem, so I changed to the UserExtension entity.) As soon as I call:

        Context.AddToBooking(booking);
        Context.SaveChanges();

    I get the following exception, and I'm completely baffled by how to fix it:

        Entities in 'FutureFlyersEntityModel.Booking' participate in the 'FK_Booking_UserExtension' relationship. 0 related 'UserExtension' were found. 1 'UserExtension' is expected.

    The final error that comes to the front end is:

        Metadata information for the relationship 'FutureFlyersModel.FK_Booking_BidInstance' could not be retrieved. Make sure that the EdmRelationshipAttribute for the relationship has been defined in the assembly. Parameter name: relationshipName.

    But both the related entities are set in the Booking entity passed through. Please help; I'm at my wits' end with this.

    Read the article

  • TDD vs. Unit testing

    - by Walter
    My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program, but it is a struggle. Which brings me to my question(s).

    There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD, does a compromise still bring added benefits? I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code), and my assumption is that there is still value in writing those unit tests. What's the best way to bring a struggling team into TDD? And failing that, is it still worth writing unit tests even if it is after the code is written? (A minimal illustration of the test-first rhythm follows below.)

    EDIT: What I've taken away from this is that it is important for us to start unit testing somewhere in the coding process. For those in the team who pick up the concept, we'll move more towards TDD and testing first. Thanks for everyone's input.

    FOLLOW-UP: We recently started a new small project, and a small portion of the team used TDD while the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks to everyone who offered advice!
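    For readers new to the distinction, a minimal sketch of test-first in C# (NUnit-style; the Discount class and names are hypothetical, not from the poster's codebase):

        using NUnit.Framework;

        [TestFixture]
        public class DiscountTests
        {
            // Written first: this test fails until Discount.Apply exists and is correct.
            [Test]
            public void Apply_TenPercent_ReducesPrice()
            {
                Assert.AreEqual(90m, Discount.Apply(100m, 0.10m));
            }
        }

        public static class Discount
        {
            // Written second, to make the failing test pass.
            public static decimal Apply(decimal price, decimal rate)
            {
                return price * (1 - rate);
            }
        }

    Test-after uses the same assertions; only the ordering (and the design pressure it creates) changes.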

    Read the article

  • How do I develop browser plugins with cross-platform and cross-browser compatibility in mind?

    - by Schnapple
    My company currently has a product which relies on a custom, in-house ActiveX control. The technology it employs (TWAIN) is itself cross-platform by design, but our solution is obviously limited to Internet Explorer on Windows. Long term, we would like to become cross-browser and cross-platform (i.e., support other browsers on Windows, support the Macintosh or Linux).

    Obviously, if we wanted to support Firefox on Windows, I would need to write a plugin for it. But if we wanted to support the Macintosh, how do I attack that? Is it possible to compile a version of the Firefox plugin that runs on the Mac? Would I be remiss to not also support Safari on the Mac? Are there any plugins which are cross-browser on a platform (i.e., can any browser run plugins built for other browsers)?

    Since TWAIN is so low-level to the operating system, I do not think Java would be a solution in any capacity, but I could be wrong. What do people generally do when they want to support multiple platforms with a process that needs to be cross-platform and cross-browser compatible?

    Read the article

  • ASP.NET Applications Requests/Sec suddenly jumps to a value of about 70 million/sec on 8-core web servers

    - by Subhrajit Roy
    We are doing performance testing of an ASP.NET web application with VSTS 2008. We start with 2,000 users and slowly ramp up to 5,000 (reaching this user load around 2.5 hours after the tests start; after that we stay at this load). The total test duration is about 6 hours.

    During these runs we have found that the counter Requests/Sec (under category ASP.NET Applications) suddenly spikes to values of 36-72 million! This keeps happening intermittently, i.e. we see the issue in about one of every three performance runs against the same application. In our testing environment we have four web servers, and interestingly enough we have found that the issue occurs only on the 8-core web servers.

    Summarizing:

    Issue: the counter Requests/Sec (under category ASP.NET Applications) suddenly jumps to a value of about 70 million/sec on the 8-core web servers. This results in an increase in SQL Server connections opened by the application. Response time goes for a toss, and error rates show similar behaviour. However, the counter ISAPI Extension Requests/sec does not show any abnormal increase; its graph almost overlaps with that of Requests/Sec until the spike appears, and when the spike appears, ISAPI Extension Requests/sec actually shows a drop.

    Test settings: performance test run with Visual Studio Team System 2008. Soak test run for 6 hours. Maximum user load of 5,000 users, attained about 2.5 hours into the run and maintained for the remaining duration (i.e. for around 3.5 more hours). The issue is reproducible, though it happens intermittently (one run in three or four).

    Test environment: web site deployed on four web servers (Windows Server 2003); two are 4-core machines and the remaining two are 8-core. .NET Framework 3.5 SP1 is installed on all four web servers. The application is hosted on IIS 6.0, run in worker process isolation mode.

    Read the article

  • Why does my Delphi program's memory continue to grow?

    - by lkessler
    I am using Delphi 2009, which has the FastMM4 memory manager built into it. My program reads in and processes a large dataset. All memory is freed correctly whenever I clear the dataset or exit the program; it has no memory leaks at all.

    Using the CurrentMemoryUsage routine given in spenwarr's answer to http://stackoverflow.com/questions/437683/how-to-get-the-memory-used-by-a-delphi-program, I have displayed the memory used by FastMM4 during processing. What seems to be happening is that memory use grows after every process-and-release cycle, e.g.:

        1,456 KB used after starting my program with no dataset.
        218,455 KB used after loading a large dataset.
        71,994 KB after clearing the dataset completely. If I exit at this point (or at any point in this example), no memory leaks are reported.
        271,905 KB used after loading the same dataset again.
        125,443 KB after clearing the dataset completely.
        325,519 KB used after loading the same dataset again.
        179,059 KB after clearing the dataset completely.
        378,752 KB used after loading the same dataset again.

    It seems that my program's memory use is growing by about 53,400 KB with each load/clear cycle. Task Manager confirms that this is actually happening.

    I have heard that FastMM4 does not always release all of the program's memory back to the operating system when objects are freed, so that it can keep some memory around for when it needs more. But this continual growth bothers me. Since no memory leaks are reported, I can't identify a problem. Does anyone know why this is happening, whether it is bad, and whether there is anything I can or should do about it?

    Read the article

  • How to make NAnt send an email using a real account

    - by Turro
    First of all, I have already seen this post: nant mail issues. But the only answer is not satisfactory (i.e. it doesn't work for me).

    I am using NAnt to get the latest version of the source, upgrade the version of the libraries and application, build the application, build the setups... all the usual things, I bet. I would like NAnt to send an email to some people confirming the conclusion of the build process. I've already checked the official (pretty ugly, IMHO) documentation for the <mail> task, but the example, once copied and customized, doesn't work. These are the NAnt target and task I'm using:

        <target name="sendMail">
          <mail from="[email protected]"
                tolist="[email protected];[email protected]"
                subject="Subject of email"
                mailhost="smtp.gmail.com"
                message="Your new release is ready!">
          </mail>
        </target>

    The error message I get is:

        530 5.7.0 Must issue a STARTTLS command first.

    It looks like the task was designed for use with a mail server that doesn't require authentication; but what can I do if I must use an external SMTP server which requires authentication (telling my boss I need an in-house SMTP server is not an option)? Can anybody help/teach me? Thanks in advance...
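    One possible workaround (a sketch, not NAnt's own <mail> task) is to send the message from a small C# helper invoked by the build, using System.Net.Mail, which does support STARTTLS and credentials; the addresses, password, and port 587 below are placeholder assumptions:

        using System.Net;
        using System.Net.Mail;

        class BuildMailer
        {
            static void Main()
            {
                var client = new SmtpClient("smtp.gmail.com", 587)
                {
                    EnableSsl = true,  // issues STARTTLS, which the 530 error demands
                    Credentials = new NetworkCredential("[email protected]", "app-password")
                };
                client.Send("[email protected]",
                            "[email protected]",
                            "Subject of email",
                            "Your new release is ready!");
            }
        }

    The same code could live inline in a NAnt <script> task instead of a separate executable.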

    Read the article

  • "This property cannot be set after writing has started!" on a C# WebRequest object

    - by EBAGHAKI
    I want to reuse a WebRequest object so that cookies and session state are preserved for later requests to the server. Below is my code. If I use the Post function twice, then the second time, at request.ContentLength = byteArray.Length, it throws the exception:

        This property cannot be set after writing has started!

    But as you can see, dataStream.Close() should close the writing process! Does anybody know what's going on?

        static WebRequest request;

        public MainForm()
        {
            request = WebRequest.Create("http://localhost/admin/admin.php");
        }

        static string Post(string url, string data)
        {
            request.Method = "POST";
            byte[] byteArray = Encoding.UTF8.GetBytes(data);
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = byteArray.Length;
            Stream dataStream = request.GetRequestStream();
            dataStream.Write(byteArray, 0, byteArray.Length);
            dataStream.Close();
            WebResponse response = request.GetResponse();
            Console.WriteLine(((HttpWebResponse)response).StatusDescription);
            dataStream = response.GetResponseStream();
            StreamReader reader = new StreamReader(dataStream);
            string responseFromServer = reader.ReadToEnd();
            Console.WriteLine(responseFromServer);
            reader.Close();
            dataStream.Close();
            response.Close();
            request.Abort();
            return responseFromServer;
        }
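    For context: a WebRequest represents a single request/response exchange and cannot be replayed. The usual pattern for keeping cookies and session across calls is a fresh HttpWebRequest per POST sharing one CookieContainer; a minimal sketch (class and field names are illustrative):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class SessionPoster
        {
            // One cookie jar shared by every request keeps the session alive.
            static readonly CookieContainer Cookies = new CookieContainer();

            static string Post(string url, string data)
            {
                var request = (HttpWebRequest)WebRequest.Create(url); // new object each call
                request.CookieContainer = Cookies;
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";
                byte[] body = Encoding.UTF8.GetBytes(data);
                request.ContentLength = body.Length;
                using (Stream s = request.GetRequestStream())
                    s.Write(body, 0, body.Length);
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    return reader.ReadToEnd();
            }
        }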

    Read the article

  • Asynchronous daemon processing / ORM interaction with Django

    - by perrierism
    I'm looking for a way to do asynchronous data processing with a daemon that uses the Django ORM. However, the ORM isn't thread-safe (as I understand it): it's not safe to retrieve or modify Django objects from within multiple threads. So I'm wondering what the correct way to achieve asynchrony is.

    Basically, what I need to accomplish is to take a list of users in the DB, query a third-party API, and then make updates to the user-profile rows for those users, as a daemon or background process. Doing this in series, user by user, is easy, but it takes too long to be at all scalable. If the daemon is retrieving and updating the users through the ORM, how do I achieve processing 10-20 users at a time?

    I would use a standard threading/queue system for this, but you can't thread interactions like models.User.objects.get(id=foo)... Django itself is an asynchronous processing system which makes asynchronous ORM calls(?) for each request, so there should be a way to do it? I haven't found anything in the documentation so far. Cheers

    Read the article

  • Automatically stashing

    - by Readonly
    The section "Last links in the chain: Stashing and the reflog" in http://ftp.newartisans.com/pub/git.from.bottom.up.pdf recommends stashing often to take snapshots of your work in progress. The author goes as far as recommending that you use a cron job to stash your work regularly, without having to do a stash manually:

        The beauty of stash is that it lets you apply unobtrusive version control to your working process itself: namely, the various stages of your working tree from day to day. You can even use stash on a regular basis if you like, with something like the following snapshot script:

            $ cat <<EOF > /usr/local/bin/git-snapshot
            #!/bin/sh
            git stash && git stash apply
            EOF
            $ chmod +x $_
            $ git snapshot

        There's no reason you couldn't run this from a cron job every hour, along with running the reflog expire command every week or month.

    The problems with this approach are:

    1. If there are no changes to your working copy, "git stash apply" will cause your last stash to be applied over your working copy.
    2. There could be race conditions between when the cron job executes and the user working on the working copy. For example, "git stash" runs, then the user opens a file, then the script's "git stash apply" is executed.

    Does anybody have suggestions for making this automatic stashing work more reliably?

    Read the article

  • Make a local-only daemon listen on a different interface (using iptables port forwarding)?

    - by UniIsland
    I have a daemon program which listens on 127.0.0.1:8000. I need to access it when I connect to my box over VPN, so I want it to listen on the ppp0 interface too.

    I've tried the "ssh -L" method. It works, but I don't think it's the right way to do it, since it leaves an extra ssh process running in the background. I tried the "netcat" method, but it exits when the connection is closed, so it is not a valid way of "listening". I also tried several iptables rules, and none of them worked. I'm not listing all the rules I've used here, but for example:

        iptables -A FORWARD -j ACCEPT
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000

    The above ruleset doesn't work. I have net.ipv4.ip_forward set to 1.

    Does anyone know how to redirect traffic from a ppp interface to lo? That is, listen on 192.168.45.1:8000 (ppp0) as well as 127.0.0.1:8000 (lo). There's no need to alter the port. Thanks.

    Read the article

  • Google Analytics API Authentication Speedup

    - by Paulo
    I'm using a Google Analytics API class in PHP, made by Doug Tan, to retrieve Analytics data for a specific profile (see http://code.google.com/intl/nl/apis/analytics/docs/gdata/gdataArticlesCode.html). When you create a new instance of the class you can add the profile ID, your Google account and password, a date range, and whatever dimensions and metrics you want to pick up from Analytics. For example, I want to see how many people visited my website from different countries in 2009:

        //make a new instance of the class
        $ga = new GoogleAnalytics($email, $password);

        //example website profile id
        $ga->setProfile('ga:4329539');

        //date range
        $ga->setDateRange('2010-02-01', '2010-03-08');

        //array to receive data from metrics and dimensions
        $array = $ga->getReport(
            array('dimensions' => ('ga:country'),
                  'metrics'    => ('ga:visits'),
                  'sort'       => '-ga:visits'
            )
        );

    Now that you know how this API class works, I'd like to address my problem: speed. It takes a lot of time to retrieve multiple types of data from the Analytics database, especially if you're building different arrays with different metrics/dimensions. How can I speed up this process? Is it possible to store all the possible data in a cache so I can retrieve the data without loading it over and over again?

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections, where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them.

    But in my app, I'm the client, and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() on it. My thread blocks on this call and wakes up once new data arrives. If I were to switch to an async pattern so that I get a callback when there's new data, I see two issues:

    1. The ThreadPool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority.
    2. I'll still have to funnel everything through a single thread at some point. Say I get N callbacks at almost the same time on N different ThreadPool threads, notifying me that there's new data. The N byte arrays they deliver can't be processed on the ThreadPool threads, because there's no guarantee that they represent N unique market data messages, since TCP is stream-based. I'll have to lock, put the bytes into an array anyway, and signal some other thread that can process what's in the array. So I'm not sure what having N ThreadPool threads is buying me.

    Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?
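    For reference, a minimal sketch of the dedicated-thread approach the poster describes (class name, buffer size, and framing are illustrative; real code would reassemble messages from the stream):

        using System.Net.Sockets;
        using System.Threading;

        class TickReceiver
        {
            readonly Socket _socket;

            public TickReceiver(Socket connectedSocket)
            {
                _socket = connectedSocket;
                var t = new Thread(ReceiveLoop) { IsBackground = true };
                t.Priority = ThreadPriority.Highest; // the one knob async callbacks don't offer
                t.Start();
            }

            void ReceiveLoop()
            {
                var buffer = new byte[64 * 1024];
                while (true)
                {
                    int n = _socket.Receive(buffer);    // blocks until data arrives
                    if (n == 0) break;                  // remote side closed the connection
                    // ... append buffer[0..n) to a message assembler, since TCP is stream-based
                }
            }
        }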

    Read the article

  • What's the correct way to release a new website?

    - by kk
    So I've been working on a website on and off for about a year now, and I'm finally at a point where it's functional enough to test out in a sort of private beta (not ready for live release). But I never thought about the correct process for doing this and what things I need to take care of; I've never released a public website before. Some of the questions/concerns I have in mind:

    1) Is it against my MSDN license agreement to release a website using the software?
    2) How do I protect my "idea"? Is it a bad idea to find random people you don't know to test out your site? Can you make them digitally sign some sort of NDA?
    3) I'm using some open source code -- is there a proper way to release open source code to live production?
    4) How much traffic can a place like discountasp.net handle anyway? Can hosting sites generally handle a large volume of traffic?

    Any comments/suggestions regarding the proper/safe way to release a public website would be appreciated. I've been working on this for a while and never actually sat down to think about the right way to move from a personal side project to a live production website.

    Read the article

  • Windows Service Conundrum

    - by Paul Johnson
    All, I have a custom object written in VB.NET (.NET 2.0). The object instantiates its own Threading.Timer object and carries out a number of background processes, including periodic interrogation of an Oracle database and delivery of emails via SMTP according to data detected in the database. The following is the code implemented in the Windows service class:

        Public Class IncidentManagerService

            'Fakes
            Private _fakeRepoFactory As IRepoFactory
            Private _incidentRepo As FakeIncidentRepo
            Private _incidentDefinitionRepo As FakeIncidentDefinitionRepo
            Private _incManager As IncidentManager.Session

            'Real
            Private _started As Boolean = False
            Private _repoFactory As New NHibernateRepoFactory
            Private _psalertsEventRepo As IPsalertsEventRepo = _repoFactory.GetPsalertsEventRepo()

            Protected Overrides Sub OnStart(ByVal args() As String)
                ' Add code here to start your service. This method should set things
                ' in motion so your service can do its work.
                If Not _started Then
                    Startup()
                    _started = True
                End If
            End Sub

            Protected Overrides Sub OnStop()
                'Tear down class variables in order to ensure the service stops cleanly
                _incManager.Dispose()
                _incidentDefinitionRepo = Nothing
                _incidentRepo = Nothing
                _fakeRepoFactory = Nothing
                _repoFactory = Nothing
            End Sub

            Private Sub Startup()
                Dim incidents As IList(Of Incident) = Nothing
                Dim incidentFactory As New IncidentFactory
                incidents = IncidentFactory.GetTwoFakeIncidents
                _repoFactory = New NHibernateRepoFactory
                _fakeRepoFactory = New FakeRepoFactory(incidents)
                _incidentRepo = _fakeRepoFactory.GetIncidentRepo
                _incidentDefinitionRepo = _fakeRepoFactory.GetIncidentDefinitionRepo
                'Start an incident manager session
                _incManager = New IncidentManager.Session(_incidentRepo, _incidentDefinitionRepo, _psalertsEventRepo)
                _incManager.Start()
            End Sub

        End Class

    After a little experimentation I arrived at the above code in the OnStart method. All functionality passed testing when deployed from VS2005 on my development PC; however, when deployed on a true target machine, the service would not start, responding with the message: "The service on local computer started and then stopped..."

    Am I going about this the correct way? If not, how can I best implement my incident manager within the confines of the Windows service class? It seems pointless to implement a timer for the IncidentManager, because it already implements its own timer...

    Any assistance much appreciated. Kind regards, Paul J.

    Read the article

  • How do I create a .NET Web Service that Posts items to a users Facebook Wall?

    - by Jourdan
    I'm currently toying around with the Clarity .NET Facebook API, but am finding certain situations with authentication to be kind of limiting. I keep going through the tutorials but always end up hitting a brick wall with what I want to do. Perhaps it just cannot be done?

    I want to make a web service that takes in the required credentials (API key, secret key, user ID (or session key?), and whatever else I would need) and then performs various tasks: post to a user's wall, add events, etc. The problem I am having is this: the current documentation, examples, and support provide a way to do this within the context of a website. Within that context, the required "connect" popup can be initiated, allowing the user to authenticate and connect the application. From that point on, the site can go on with its business and do what it needs to do.

    If I close the browser and come back to the page, I have to push the connect button again -- except this time, since I was already logged into Facebook, I don't have to go through the whole connection process. How do applications like TweetDeck get around this? They seemingly have you connect once, when you install their application, and you don't have to do it again.

    I would assume that this same idea would have to be applied to making a web service, because you don't know what context the user is in when making the web service call. The web service methods being called could be coming from a Windows Forms app, or from code-behind in a workflow.

    Read the article

  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS.

    According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in the install.rdf. There is the McCoy tool to do all of this, but it is an interactive GUI tool, and I'd like to automate the extension packaging using an Ant script (as this is part of a much bigger process). I can't find a more precise description of what happens when signing the update.rdf manifest than the one below, and McCoy's source is an awful lot of JavaScript. The doc says:

        The add-on author creates a public/private RSA cryptographic key pair. The public part of the key is DER encoded and then base 64 encoded and added to the add-on's install.rdf as an updateKey entry. (...) Roughly speaking the update information is converted to a string, then hashed using a sha512 hashing algorithm and this hash is signed using the private key. The resultant data is DER encoded then base 64 encoded for inclusion in the update.rdf as a signature entry.

    I don't know DER encoding well, but it seems like it needs some parameters. So would anyone know either:

    - the full algorithm to sign the update.rdf and install.rdf using a predefined key pair,
    - a scriptable alternative to McCoy,
    - whether a command-line tool like asn1coding would suffice, or
    - a good/simple developer tutorial on DER encoding?
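    For what it's worth, a minimal C# sketch of the hash-then-sign-then-base64 flow the quoted doc describes. This produces a raw PKCS#1 RSA/SHA-512 signature, which may not match McCoy's exact DER wrapping, so treat it as an illustration of the steps rather than a drop-in replacement; the file path is a placeholder:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class UpdateSigner
        {
            static void Main()
            {
                // The serialized update information (path is a placeholder).
                byte[] data = File.ReadAllBytes("update-info.txt");
                using (RSA rsa = RSA.Create(2048))
                {
                    // SHA-512 hash of the data, signed with the private key.
                    byte[] signature = rsa.SignData(
                        data, HashAlgorithmName.SHA512, RSASignaturePadding.Pkcs1);
                    Console.WriteLine(Convert.ToBase64String(signature));
                }
            }
        }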

    Read the article

  • How to preprocess text to do OCR error correction

    - by eaglefarm
    Here is what I'm trying to accomplish: I need to get several large text files off a computer that is not networked and has no output except a printer. I tried printing the text, then scanning the printout with OCR to recover the text on another computer, but the OCR makes lots of errors (1 vs. l, o vs. 0, O vs. D, etc.).

    To solve this, I am thinking of writing a program to process (annotate?) the text file before printing it, so that the errors can be corrected from the text output of the OCR program. For example, for 1 (number one) vs. l (letter L), I could change the text by inserting \nnn after characters that are frequently wrong in the OCR results, so that "sample" becomes:

        sampl\108e

    Then I can write another program to examine the file, looking for \nnn, checking the character before the \nnn (where nnn is the ASCII code in decimal), and fixing it if necessary. Of course, that program will have to recognize that the \nnn may itself have errors, but at least it knows that the nnn are digits and can easily correct them. I think I would add a CRC on each line so that any line that isn't corrected perfectly can be flagged as having a problem.

    Has anyone done anything like this? If there is an existing way of doing this, I'd rather not reinvent the wheel. Any suggestions for an annotation format that would help solve this problem would be welcome too.
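    A minimal C# sketch of the annotation pass described above (the ambiguous-character set is illustrative; \nnn is the poster's decimal-ASCII format):

        using System;
        using System.Text;

        class OcrAnnotator
        {
            // Characters OCR frequently confuses: 1/l/I, 0/o/O/D, etc.
            static readonly string Ambiguous = "1lI0oOD";

            public static string Annotate(string line)
            {
                var sb = new StringBuilder();
                foreach (char c in line)
                {
                    sb.Append(c);
                    if (Ambiguous.IndexOf(c) >= 0)
                        sb.Append('\\').Append((int)c); // e.g. 'l' (108) -> l\108
                }
                return sb.ToString();
            }

            static void Main()
            {
                Console.WriteLine(Annotate("sample")); // prints: sampl\108e
            }
        }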

    Read the article

  • How should bug tracking and help tickets integrate?

    - by Max Schmeling
    I have a little experience with bug tracking systems such as FogBugz, where help tickets are (or can be) bugs, and I have some experience using a bug tracking system internally, completely separate from the help center system. My question is: in a company with an existing (home-grown) help center system, where replacing it is not an option, how should a bug tracking system (probably Mantis) be integrated into the process?

    Right now, help tickets get put in for issues, questions, etc., and they get assigned to the appropriate person (PC tech, help desk staff, or, if it's an application issue the help desk can't solve, a developer). A user can put a request for small modifications or fixes to an application in a help ticket, and the developer it gets assigned to will make the change at some point, apply their time to that ticket, and then close the ticket when the change goes to production.

    We don't currently have a bug tracking system, so I'm looking into the best way to integrate one. Should we just take the help tickets, put them into the bug tracking system when they describe a bug (or issue, or feature request), and then close the ticket if it's not an emergency fix? We probably don't want to expose the bug tracking system to anyone else, as they wouldn't know what to put in the help center system versus the bug tracker... right?

    Any thoughts? Suggestions? Tips? Advice? To-dos? Not-to-dos? etc...

    Read the article

  • Safari Extension Questions

    - by Rob Wilkerson
    I'm in the process of building my first Safari extension -- a very simple one -- but I've run into a couple of problems. The extension boils down to a single injected script that attempts to bypass the native feed handler and redirect to an http:// URI. My issues so far are twofold:

    1. The "whitelist" isn't working the way I'd expect. Since all feeds are shown under the feed:// protocol, I've tried to capture that in the whitelist as "feed://*/*" (with nothing in the blacklist), but I end up in a request loop that I can't understand. If I set blacklist values of "http://*/*" and "https://*/*", everything works as expected.
    2. I can't figure out how to access my settings from my injected script. The script creates a beforeload event handler but can't access my settings using the safari.extension.settings path indicated in the documentation. I haven't found anything in Apple's documentation to indicate that settings shouldn't be available from my script.

    Since extensions are such a new feature, even Google returns limited relevant results, and most of those are from the official documentation. What am I missing? Thanks.

    Read the article

  • Can this method to convert a name to proper case be improved?

    - by Kelsey
    I am writing a basic function to convert millions of names (a one-time batch process) from their current form, which is all upper case, to proper mixed case. I came up with the following so far:

        public string ConvertToProperNameCase(string input)
        {
            TextInfo textInfo = new CultureInfo("en-US", false).TextInfo;
            char[] chars = textInfo.ToTitleCase(input.ToLower()).ToCharArray();
            for (int i = 0; i + 1 < chars.Length; i++)
            {
                if ((chars[i].Equals('\'')) || (chars[i].Equals('-')))
                {
                    chars[i + 1] = Char.ToUpper(chars[i + 1]);
                }
            }
            return new string(chars);
        }

    It works in most cases, such as:

        JOHN SMITH     -> John Smith
        SMITH, JOHN T  -> Smith, John T
        JOHN O'BRIAN   -> John O'Brian
        JOHN DOE-SMITH -> John Doe-Smith

    There are some edge cases that do not work:

        JASON MCDONALD   -> Jason Mcdonald   (correct: Jason McDonald)
        OSCAR DE LA HOYA -> Oscar De La Hoya (correct: Oscar de la Hoya)
        MARIE DIFRANCO   -> Marie Difranco   (correct: Marie DiFranco)

    These are not captured, and I am not sure I can handle all these odd edge cases. Can anyone think of anything I could change or add to capture more edge cases? I am sure there are tons of edge cases I am not even thinking of.

    All casing should follow North American conventions: if certain countries expect a specific capitalization format, and that format differs from the North American one, then the North American format takes precedence.
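    One common (partial) approach to the edge cases above is a post-processing pass over the title-cased result using word lists; a hedged sketch, where the particle list and "Mc" rule are illustrative rather than exhaustive:

        static readonly string[] LowercaseParticles = { "de", "la", "van", "von", "der" };

        static string FixEdgeCases(string properCased)
        {
            string[] words = properCased.Split(' ');
            for (int i = 0; i < words.Length; i++)
            {
                string lower = words[i].ToLowerInvariant();
                // "Mcdonald" -> "McDonald": re-capitalize after a leading "Mc"
                if (lower.StartsWith("mc") && words[i].Length > 2)
                    words[i] = "Mc" + Char.ToUpper(words[i][2]) + words[i].Substring(3);
                // "De La Hoya" -> "de la Hoya": particles stay lower case unless they lead the name
                else if (i > 0 && Array.IndexOf(LowercaseParticles, lower) >= 0)
                    words[i] = lower;
            }
            return String.Join(" ", words);
        }

    Names like DiFranco resist rules entirely and generally need a lookup dictionary of known surnames; no purely algorithmic pass catches them all.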

    Read the article
