Search Results

Search found 14961 results on 599 pages for 'tab complete'.


  • Best way to fork SVN project with Git

    - by Jeremy Thomerson
    I have forked an SVN project using Git because I needed to add features that the maintainers didn't want. At the same time, I want to be able to continue pulling features or fixes that they add to the upstream version down into my fork (where they don't conflict). So I have my Git project with the following branches:

        master      - the branch I actually build and deploy from
        feature_*   - feature branches where I work on new things, which I then merge to master when complete
        vendor-svn  - my local-only git-svn branch that allows me to "git svn rebase" from their SVN repo
        vendor      - my local branch that I merge vendor-svn into; I then push this (vendor) branch to the public Git repo (GitHub)

    So my flow is something like this:

        git checkout vendor-svn
        git svn rebase
        git checkout vendor
        git merge vendor-svn
        git push origin vendor

    Now, the question comes here: I need to review each commit that they made (preferably individually, since at this point I'm about twenty commits behind them) before merging them into master. I know that I could run "git checkout master; git merge vendor", but this would pull in all changes and commit them, without me being able to see whether they conflict with what I need. So, what's the best way to do this? Git seems like a great tool for handling forks of projects, since you can pull from and push to multiple repos - I'm just not experienced with it enough to know the best way of doing this. Here's the original SVN project I'm talking about: https://appkonference.svn.sourceforge.net/svnroot/appkonference My fork is at github.com/jthomerson/AsteriskAudioKonf (sorry - I couldn't make it a link since I'm a new user here)
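
    For reference, one way to review the upstream commits one at a time is sketched below; the branch names follow the post, everything else is an assumption, not a prescribed workflow:

        # list the ~20 upstream commits that master does not yet have
        git checkout master
        git log --oneline --reverse master..vendor

        # option 1: bring them over one commit at a time
        git cherry-pick <commit-sha>        # repeat per commit, resolving conflicts as they appear

        # option 2: stage the whole merge without committing, inspect it, then commit or back out
        git merge --no-commit --no-ff vendor
        git diff --cached                   # review exactly what would change
        git merge --abort                   # abandon the merge if it conflicts with local work

    Either route keeps the review step explicit instead of letting a single "git merge vendor" record everything at once.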


  • Need help in resolving a compiler error: error: invalid conversion from ‘int’ to ‘GIOCondition’

    - by michael
    I have a simple cpp file which uses GIO. I have stripped out everything to show my compile error. Here is the error I get:

        My.cpp:16: error: invalid conversion from ‘int’ to ‘GIOCondition’
        make[2]: *** [My.o] Error 1

    Here is the complete file:

        #include <glib.h>

        static gboolean read_socket(GIOChannel *gio, GIOCondition condition, gpointer data)
        {
            return false;
        }

        void createGIOChannel()
        {
            GIOChannel* gioChannel = g_io_channel_unix_new(0);
            // the following is the line causing the error:
            g_io_add_watch(gioChannel, G_IO_IN|G_IO_HUP, read_socket, NULL);
        }

    I have seen other examples using GIO, and I am doing the same thing in terms of calling G_IO_IN|G_IO_HUP. And the documentation (http://www.gtk.org/api/2.6/glib/glib-IO-Channels.html) says I only need to include glib.h, which I did. Can you please tell me how to resolve my error? One thing I can think of is that I am doing this in a cpp file, but g_io_add_watch is a C function. Thank you for any help. I have spent hours on this but did not get anywhere.
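
    For what it's worth, in C++ the bitwise OR of two enum values yields an int, and C++ (unlike C) will not implicitly convert that int back to the enum type. A common workaround - sketched here as a suggestion, not taken from the post - is an explicit cast on the same call:

        /* sketch: same call as in the post, with the condition cast back to GIOCondition */
        g_io_add_watch(gioChannel,
                       (GIOCondition)(G_IO_IN | G_IO_HUP),   /* C++ needs the cast; plain C does not */
                       read_socket,
                       NULL);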


  • Eclipse plugin installation/update issues

    - by The Elite Gentleman
    I've installed the following Team repository plugins (along with their dependencies) for Eclipse Helios, using the Eclipse updater:

        MercurialEclipse 1.7.1
        Subclipse 1.6.17
        Subversive SVN

    All of these are the latest in the Eclipse Marketplace. My problem is that when I go to Eclipse "Preferences", under "Team" I only see CVS, yet under Eclipse Marketplace I can see that these plugins are installed (it gives me the option to uninstall them). How do I configure my Team repositories to show up under "Team" in Preferences? Also, there is an update for "Eclipse IDE for Java EE Developers", but when I try to update it, the following error occurs:

        Cannot complete the install because of a conflicting dependency.
        Software being installed: Eclipse IDE for Java EE Developers 1.3.2.20110301-1807 (epp.package.jee 1.3.2.20110301-1807)
        Software currently installed: Shared profile 1.0.0.1276787175574 (SharedProfile_epp.package.jee 1.0.0.1276787175574)
        Only one of the following can be installed at once:
            toolingepp.package.jee.configuration 1.3.2.20110301-1807
            toolingepp.package.jee.configuration 1.3.0.20100617-0521
        Cannot satisfy dependency:
            From: Shared profile 1.0.0.1276787175574 (SharedProfile_epp.package.jee 1.0.0.1276787175574)
            To: toolingepp.package.jee.configuration [1.3.0.20100617-0521]
        Cannot satisfy dependency:
            From: Eclipse IDE for Java EE Developers 1.3.2.20110301-1807 (epp.package.jee 1.3.2.20110301-1807)
            To: toolingepp.package.jee.configuration [1.3.2.20110301-1807]

    How do I solve it? Yes, I've spent days Googling this issue, but nothing has solved my problem. Thanks in advance.


  • Temporary debug releases and final application releases

    - by baron
    I have a quick question regarding debug and release builds in VS 2008. I have an app I've been working on - it's not yet complete, but the bulk of the functionality is there. So I'm trying to give a copy of it now to the person helping with documentation, just so they can have a play and get a feel for what I've made. Now the question is how to provide it to them. I was told to just copy the .exe out of the debug/bin folder and put that onto USB. But when testing, if I run this .exe anywhere else (outside of this folder) it crashes. I've now worked out why this is:

        var path = ConfigurationManager.AppSettings["PathToUse"];
        var files = Directory.GetFiles(path);

    throws a null reference, so the App.config file is not being used. If I copy that file in with the .exe it works again. So actually my question is about the best way to manage this situation. What is the best way to provide a working copy to people, and is there a reference on preparing apps for release - so everything is packaged together and installed in a clean, structured folder hierarchy?
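
    As background to the crash itself: App.config is compiled into <AppName>.exe.config and is read from the folder next to the executable, so shipping the bare .exe loses the settings. A minimal defensive sketch (setting name from the post; the error handling is an assumption) makes that failure explicit instead of a NullReferenceException:

        // sketch only - fails loudly if the .exe.config was not copied along with the .exe
        var path = ConfigurationManager.AppSettings["PathToUse"];
        if (string.IsNullOrEmpty(path))
        {
            throw new ConfigurationErrorsException(
                "PathToUse is missing - is <AppName>.exe.config sitting beside the executable?");
        }
        var files = Directory.GetFiles(path);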


  • Google Code Jam 2010 Large DataSets Take Too Long to Submit

    - by Travis
    Hey guys, I'm participating in the 2010 Code Jam and I solved two of the problems for the small data sets, but I'm not even close to solving the large data sets in the 8-minute time frame. I'm wondering, for anyone out there who has solved the large data set:

        - What hardware were you running on?
        - What language were you using?
        - What performance tuning techniques did you apply to make your code run as fast as possible?

    I'm writing the solutions in Ruby, which is not my day-to-day language, and executing them on my MacBook Pro. My solutions for problem A and problem C are on GitHub at http://github.com/tjboudreaux/codejam2010. I'd appreciate any suggestions you may have. FWIW, I have a lot of experience in C++ from college, my primary language is PHP, and my "sandbox" language is Ruby. Was I just a bit ambitious by taking a shot at this in Ruby, not knowing where the language struggles for performance, or does anyone see anything that's a red flag as to why I can't complete the large dataset in time to submit?


  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vBulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all, and the output is verified to be JPEG by hex dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open them with evince, I get this message: "Error interpreting JPEG image file (JPEG datastream contains no image)". What could be going on here? Some background info:

        - the sqldump is around 8 years old
        - vBulletin 2.x was the software that stored the info
        - most likely PHP 4 was used
        - most likely MySQL 4.0, possibly even 3.x
        - the column datatype these attachments are stored in is mediumtext

    My Python 3.1 script:

        #!/usr/bin/env python3.1
        import re

        trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""")
        trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""")
        extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""")

        with open('attachments.sql', 'rb') as fh:
            for line in fh:
                data = trim_l.findall(line)[0]
                data = trim_r.findall(data)[0]
                data = extractor.findall(data)
                if data:
                    name, data = data[0]
                    try:
                        filename = 'files/%s' % str(name, 'UTF-8')
                        ah = open(filename, 'wb')
                        ah.write(data)
                    except UnicodeDecodeError:
                        continue
                    finally:
                        ah.close()
        fh.close()

    Update: The JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers", so I'm trying to find a more complete list. Any guidance here would also be appreciated.
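
    One thing worth checking - this is an assumption about the cause, not something stated in the post: mysqldump backslash-escapes binary data inside string literals, so the extracted bytes may still contain sequences like \' \" \\ \0 \n \r \Z, which would also explain the FF5C pairs (FF followed by a backslash, 0x5C). A minimal left-to-right unescaping pass might look like this:

        def unescape(data: bytes) -> bytes:
            # undo mysqldump's string-literal escaping; scan left to right so each
            # backslash escape is consumed exactly once
            mapping = {ord('0'): b'\x00', ord('n'): b'\n', ord('r'): b'\r',
                       ord('Z'): b'\x1a', ord("'"): b"'", ord('"'): b'"',
                       ord('\\'): b'\\'}
            out = bytearray()
            i = 0
            while i < len(data):
                if data[i] == 0x5C and i + 1 < len(data) and data[i + 1] in mapping:
                    out += mapping[data[i + 1]]
                    i += 2
                else:
                    out.append(data[i])
                    i += 1
            return bytes(out)

    If that is the problem, writing ah.write(unescape(data)) instead of ah.write(data) should produce files evince can open.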


  • Why is my logic not working correctly for SPOJ TOPOSORT?

    - by Kavish Dwivedi
    The given problem is http://www.spoj.com/problems/TOPOSORT/ The output format is particularly important: print "Sandro fails." if Sandro cannot complete all his duties on the list. If there is a solution, print the correct ordering, the jobs to be done separated by whitespace. If there are multiple solutions, print the one whose first number is smallest; if there are still multiple solutions, print the one whose second number is smallest, and so on. What I am doing is simply a DFS with the edges reversed, i.e. if job A finishes before job B, there is a directed edge from B to A. I maintain the order by sorting the adjacency list I created, and I store the nodes which don't have any constraints separately so as to print them later in the correct order. There are two flag arrays: one for marking a discovered node and one for marking a node whose neighbors have all been explored. My solution is http://www.ideone.com/QCUmKY (the important function is the visit function), and it's giving WA after running correctly for 10 cases, so it's really hard to figure out where I am going wrong, since it runs on all of the test cases I have done by hand.
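
    As an aside - my own guess at the likely cause, not something from the post: a DFS over sorted adjacency lists does not in general produce the lexicographically smallest topological order. The usual fit for this output rule is Kahn's algorithm with a min-heap of currently available jobs, roughly like this sketch:

        import heapq

        def smallest_topo_order(n, edges):
            """edges: list of (a, b) meaning job a must finish before job b (jobs are 1..n)."""
            adj = [[] for _ in range(n + 1)]
            indeg = [0] * (n + 1)
            for a, b in edges:
                adj[a].append(b)
                indeg[b] += 1
            heap = [v for v in range(1, n + 1) if indeg[v] == 0]
            heapq.heapify(heap)
            order = []
            while heap:
                v = heapq.heappop(heap)          # always take the smallest currently available job
                order.append(v)
                for w in adj[v]:
                    indeg[w] -= 1
                    if indeg[w] == 0:
                        heapq.heappush(heap, w)
            return order if len(order) == n else None   # None -> "Sandro fails." (a cycle exists)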


  • how can I unload a swf with AS3?

    - by Ole Media
    Can anybody help with how I can unload a SWF before I load the next one?

        var mLoader:Loader = new Loader();

        function loadSWF(e:Event):void {
            var imageId:Array = e.target.name.split("_");
            var targetId:int = imageId[0];
            var caption:int = imageId[1];

            txt = arrayText[caption];
            dynText1.text = "";
            dynText1.text = arrayText[caption];
            dynText1.setTextFormat(textFormatMain);

            var swf:String = "swf/" + xmlList[targetId].image[caption].@swf;

            if (mLoader.content != null) {
                swfLoad.swfArea.mLoader.unload();
                swfLoad.swfArea.mLoader.load(null);
            }

            mLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, onCompleteHandler);
            mLoader.contentLoaderInfo.addEventListener(ProgressEvent.PROGRESS, onProgressHandler);
            mLoader.load(new URLRequest(swf));
        }

        //-------------------------------------------------------------------------//

        function onCompleteHandler(loadEvent:Event):void {
            swfLoad.swfArea.addChild(loadEvent.currentTarget.content);
            TweenLite.to(swfLoad, 0.5, {alpha:1});
        }

        function onProgressHandler(mProgress:ProgressEvent):void {
            //var percent:Number = mProgress.bytesLoaded/mProgress.bytesTotal;
            //trace(percent);
        }
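
    For what it's worth, a common pattern - sketched below as an assumption, using the names from the post - is to unload the Loader instance itself (rather than a property reached through swfLoad.swfArea) and to remove its old content from the display list before starting the next load:

        // sketch: mLoader and swfLoad.swfArea follow the post; the flow is an assumption
        if (mLoader.content != null) {
            if (swfLoad.swfArea.contains(mLoader.content)) {
                swfLoad.swfArea.removeChild(mLoader.content);   // take the old SWF off the stage
            }
            mLoader.unloadAndStop();   // Flash Player 10+; fall back to mLoader.unload() on older players
        }
        mLoader.load(new URLRequest(swf));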


  • ExecutorService - scaling

    - by Stanciu Alexandru-Marian
    I am trying to write a program in Java using ExecutorService and its invokeAll function. My question is: does invokeAll solve the tasks simultaneously? I mean, if I have two processors, will there be two workers at the same time? I can't make it scale correctly: it takes the same time to complete the problem whether I give newFixedThreadPool(2) or 1.

        List<Future<PartialSolution>> list = new ArrayList<Future<PartialSolution>>();
        Collection<Callable<PartialSolution>> tasks = new ArrayList<Callable<PartialSolution>>();

        for (PartialSolution ps : wp) {
            tasks.add(new Map(ps, keyWords));
        }

        list = executor.invokeAll(tasks);

    Map is a class that implements Callable, and wp is a vector of PartialSolutions, a class that holds some information at different times. Why doesn't it scale? What could be the problem? Thank you, Alex
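
    invokeAll itself does run tasks on as many threads as the pool provides. A minimal experiment like the sketch below (my own illustration, unrelated to the PartialSolution code) can confirm that locally; if this scales and the real program does not, the bottleneck is likely inside the Callables (shared state, I/O, synchronization) rather than in the executor:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class InvokeAllDemo {
            public static void main(String[] args) throws Exception {
                for (int threads : new int[] {1, 2}) {
                    ExecutorService pool = Executors.newFixedThreadPool(threads);
                    List<Callable<Long>> tasks = new ArrayList<Callable<Long>>();
                    for (int i = 0; i < 8; i++) {
                        tasks.add(new Callable<Long>() {
                            public Long call() {            // pure CPU work, no shared state
                                long sum = 0;
                                for (long j = 0; j < 200000000L; j++) sum += j;
                                return sum;
                            }
                        });
                    }
                    long start = System.nanoTime();
                    pool.invokeAll(tasks);                  // blocks until every task is done
                    System.out.printf("%d thread(s): %d ms%n", threads,
                                      (System.nanoTime() - start) / 1000000);
                    pool.shutdown();
                }
            }
        }

    With two hardware threads available, the second run should take roughly half as long as the first.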


  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations - specifically, when inserting or updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or how far along it is. Basically, all I know is when Update returns, and it is then assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, then 100%.

    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate an estimated size per DataRow based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or a mouse waiting cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser's file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?
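
    One workaround worth considering - a sketch under my own assumptions, with placeholder adapter and control names - is to push the changed rows through in small batches instead of a single Update(table) call, reporting progress between batches:

        // sketch: assumes the generated TableAdapter (or underlying DbDataAdapter)
        // exposes an Update(DataRow[]) overload, which typed adapters normally do
        DataRow[] changed = myTable.Select("", "",
            DataViewRowState.ModifiedCurrent | DataViewRowState.Added);
        for (int i = 0; i < changed.Length; i++)
        {
            myTableAdapter.Update(new[] { changed[i] });     // one row (or a small batch) at a time
            int percent = (i + 1) * 100 / changed.Length;
            progressBar.Value = percent;                     // per-row progress for the UI
        }

    It trades some round-trip efficiency for row-level granularity, which is usually enough to drive a determinate progress bar.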


  • How CPU finds ISR and distinguishes between devices

    - by ripunjay-tripathi-gmail-com
    I should first share all that I know - and that is complete chaos. There are several different questions on the topic, so please don't get irritated. :)

    1) To find an ISR, the CPU is provided with an interrupt number. In x86 machines (286/386 and above) there is an IVT with ISRs in it; each entry is 4 bytes in size, so we multiply the interrupt number by 4 to find the ISR. So the first bunch of questions is: I am completely confused about the mechanism by which the CPU receives the interrupt. To raise an interrupt, the device first probes for an IRQ - then what? Does the interrupt number travel "on the IRQ" towards the CPU? I also read something about the device putting the ISR address on the data bus; what is that about, then? What is the concept of devices overriding the ISR? Can somebody give me a few example devices where the CPU polls for interrupts? And where does it find the ISR for them?

    2) If two devices share an IRQ (which is very much possible), how does the CPU distinguish between them? What if both devices raise an interrupt of the same priority simultaneously? I gather there will be masking of same-type and low-priority interrupts - but how does this communication happen between the CPU and the device controller? I studied the role of the PIC and APIC for this problem, but could not understand it. Thanks for reading. Thank you very much for answering.


  • WiX: Forcefully launch uninstall previous using CustomAction

    - by leiflundgren
    I'm writing a new major upgrade of our product. In my installer I start by finding the configuration settings of the previous version, then I'd like to uninstall that previous version. I have found several guides telling me how one should make an MSI suitable for such upgrades. However, the previous version was not an MSI and was not built according to best practices. It does, however, specify an UninstallString in the registry under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{GUID}. Using a RegistrySearch I can easily find the command below, which I store in UNINSTALL_CMD:

        RunDll32 C:\PROGRA~1\COMMON~1\INSTAL~1\PROFES~1\RunTime\10\01\Intel32\Ctor.dll,LaunchSetup "C:\Program Files\InstallShield Installation Information\{GUID}\setup.exe" -l0x9 -removeonly

    I cannot get the hang of the CustomAction needed to perform the actual uninstall:

        <CustomAction Id="ca.UninstPrev" Property="UNINSTALL_CMD" ExeCommand="" />

    The MSI log says:

        Info 1721. There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support personnel or package vendor.
        Action: ca.UninstallPrevious, location: RunDll32 C:\PROGRA~1\COMMON~1\INSTAL~1\PROFES~1\RunTime\10\01\Intel32\Ctor.dll,LaunchSetup "C:\Program Files\InstallShield Installation Information\{GUID}\setup.exe" -l0x9 -removeonly, command:

    Does anyone see what I am doing wrong here? Regards, Leif
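
    One observation, offered as a sketch rather than a verified fix: in a Property-plus-ExeCommand custom action, the property is expected to hold only the path to the executable, while ExeCommand holds the arguments. The log above shows the whole command line in "location:" and nothing after "command:", which matches the property carrying everything and ExeCommand being empty. Splitting it along those lines might look like this (IDs and the hard-coded InstallShield path are placeholders; in practice the arguments would be parsed out of UNINSTALL_CMD):

        <!-- sketch only: property holds the exe, ExeCommand holds its arguments -->
        <Property Id="RUNDLL32EXE" Value="[SystemFolder]rundll32.exe" />
        <CustomAction Id="ca.UninstPrev"
                      Property="RUNDLL32EXE"
                      ExeCommand="C:\PROGRA~1\COMMON~1\INSTAL~1\PROFES~1\RunTime\10\01\Intel32\Ctor.dll,LaunchSetup &quot;C:\Program Files\InstallShield Installation Information\{GUID}\setup.exe&quot; -l0x9 -removeonly"
                      Return="check" />

    The action would still need to be scheduled in InstallExecuteSequence for it to run.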


  • How do I scrape information off ASP.NET websites when paging and JavaScript links are being used?

    - by Ian Roke
    I have been given a staff list which is supposed to be up to date, but it doesn't match an intranet People Finder which is written in ASP.NET. As the information is sensitive, I am not able to access the database the People Finder is using, so the only way I can get at the information is by scraping the structure, starting with the top brass at the top and then going through each tier in turn. Each person has a staff number which forms the URL http://intranet/peoplefinder/index.aspx?srn=ABC1234, and all the people who report to them are listed underneath in the format

        <a id="gvEmployees_ctl03_lnkFullName" href="index.aspx?srn=ABC4321" target="_self">

    where each URL indicates the staff number and provides a link to that person's team. The trouble arises when the teams are big, as paging is implemented in the GridView with an URL such as

        <a href="javascript:__doPostBack('gvEmployees','Page$2')">2</a>

    How would I scrape this page, capture the SRN and other details along with the people who report to the person on all pages of the GridView, then loop through each reportee and repeat the process until the whole list is complete?
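
    For the paging part, a common approach - sketched here under my own assumptions, with the URL and control names taken from the post and ExtractHidden a hypothetical helper - is to simulate the __doPostBack call yourself: POST back the hidden WebForms fields along with __EVENTTARGET/__EVENTARGUMENT so the server returns page 2 of the same grid:

        // needs: System.Net, System.Collections.Specialized, System.Text
        using (var client = new WebClient())
        {
            string url = "http://intranet/peoplefinder/index.aspx?srn=ABC1234";
            string page1 = client.DownloadString(url);

            // ExtractHidden is a hypothetical helper that pulls value="..." out of the
            // named <input type="hidden"> element (e.g. with a regex or an HTML parser)
            var form = new NameValueCollection
            {
                { "__EVENTTARGET",     "gvEmployees" },
                { "__EVENTARGUMENT",   "Page$2" },
                { "__VIEWSTATE",       ExtractHidden(page1, "__VIEWSTATE") },
                { "__EVENTVALIDATION", ExtractHidden(page1, "__EVENTVALIDATION") }
            };
            byte[] response = client.UploadValues(url, "POST", form);
            string page2 = Encoding.UTF8.GetString(response);   // second page of the GridView
        }

    Repeating with "Page$3", "Page$4", and so on walks every page; the SRN links can then be parsed from each returned page and followed recursively.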


  • Can someone help me refactor this C# linq business logic for efficiency?

    - by Russell
    I feel like this is not a very efficient way of using LINQ. I was hoping somebody on here would have a suggestion for a refactor. I realize this code is not very pretty, as I was in a complete rush.

        public class Workflow
        {
            public void AssignForms()
            {
                using (var cntx = new ProjectBusiness.Providers.ProjectDataContext())
                {
                    var emplist = (from e in cntx.vw_EmployeeTaskLists
                                   where e.OwnerEmployeeID == null
                                   select e).ToList();

                    foreach (var emp in emplist)
                    {
                        // if employee has a form assigned: break;
                        if (emp.GRADE > 15 || (emp.Pay_Plan.ToLower().Contains("al") || emp.Pay_Plan.ToLower().Contains("ex")))
                        {
                            //Assign278();
                        }
                        else if ((emp.Series.Contains("0905") || emp.Series.Contains("0511") || emp.Series.Contains("0110") || emp.Series.Contains("1801"))
                                 || (emp.GRADE >= 12 && emp.GRADE <= 15))
                        {
                            var emptask = new ProjectBusiness.Providers.EmployeeTask();
                            emptask.TimespanID = cntx.Timespans.SingleOrDefault(t => t.BeginDate.Year == DateTime.Today.Year & t.EndDate.Year == DateTime.Today.Year).TimespanID;
                            var FormID = (from f in cntx.Forms
                                          where f.FormName.Contains("450")
                                          select f.FormID).FirstOrDefault();
                            var TaskStatusID = (from s in cntx.TaskStatus
                                                where s.StatusDescription.ToLower() == "not started"
                                                select s.TaskStatusID).FirstOrDefault();
                            Assign450((int)emp.EmployeeID, FormID, TaskStatusID, emptask);
                            cntx.EmployeeTasks.InsertOnSubmit(emptask);
                        }
                        else
                        {
                            //Assign185();
                        }
                    }
                    cntx.SubmitChanges();
                }
            }

            private void Assign450(int EmployeeID, int FormID, int TaskStatusID, ProjectBusiness.Providers.EmployeeTask emptask)
            {
                emptask.FormID = FormID;
                emptask.OwnerEmployeeID = EmployeeID;
                emptask.AssignedToEmployeeID = EmployeeID;
                emptask.TaskStatusID = TaskStatusID;
                emptask.DueDate = DateTime.Today;
            }
        }
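
    One easy win worth noting - a sketch of a possible refactor, not a drop-in replacement: the Timespan, Form, and TaskStatus lookups do not depend on emp, so they can be resolved once before the loop instead of issuing three queries per employee:

        // sketch: hoist the loop-invariant lookups out of the foreach
        var year = DateTime.Today.Year;
        var timespanId = cntx.Timespans
            .SingleOrDefault(t => t.BeginDate.Year == year && t.EndDate.Year == year).TimespanID;
        var formId = (from f in cntx.Forms where f.FormName.Contains("450") select f.FormID).FirstOrDefault();
        var taskStatusId = (from s in cntx.TaskStatus
                            where s.StatusDescription.ToLower() == "not started"
                            select s.TaskStatusID).FirstOrDefault();

        foreach (var emp in emplist)
        {
            // ... same branching as before, but reusing timespanId / formId / taskStatusId
        }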


  • how to specify open id realm in openid4java 0.9.5

    - by Salvin Francis
    My URL in the development environment: http://192.168.0.1:8888/com.company.MyEntryPoint/MyEntrypoint.html
    My URL in the live environment: http://www.example.com/com.company.MyEntryPoint/MyEntrypoint.html

    I need users to authenticate using OpenID, and this is how I want my realm to be: *.company.MyEntryPoint. I wrote a simple call to specify the realm:

        AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "*.company.MyEntryPoint");

    It does not work. Exception:

        org.openid4java.message.MessageException: 0x301: Realm verification failed (2) for: *.company.MyEntryPoint
            at org.openid4java.message.AuthRequest.validate(AuthRequest.java:354)
            at org.openid4java.message.AuthRequest.createAuthRequest(AuthRequest.java:101)
            at org.openid4java.consumer.ConsumerManager.authenticate(ConsumerManager.java:1073)

    Of all the combinations I tried, curiously, the following worked:

        AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "http://localhost:8888/com.capgent.MyEntryPoint");

    This does not solve my issue but rather complicates it. :) According to Google and the OpenID spec it should have worked. Complete code snippet:

        List discoveries = manager.discover(clientUrl);
        DiscoveryInformation discovered = manager.associate(discoveries);
        AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "*.company.MyEntryPoint");

        FetchRequest fetch = FetchRequest.createFetchRequest();
        fetch.addAttribute("email", "http://schema.openid.net/contact/email", true);
        fetch.addAttribute("country", "http://axschema.org/contact/country/home", true);
        fetch.addAttribute("firstname", "http://axschema.org/namePerson/first", true);
        fetch.addAttribute("lastname", "http://axschema.org/namePerson/last", true);
        fetch.addAttribute("language", "http://axschema.org/pref/language", true);
        authReq.addExtension(fetch);

        String returnStr;
        if (!discovered.isVersion2()) {
            returnStr = authReq.getDestinationUrl(true);
        } else {
            returnStr = authReq.getDestinationUrl(false);
        }

    What am I doing wrong here?
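
    For context - my reading of the OpenID 2.0 realm rules, not a verified fix: a realm has to be a URL (scheme, host, optional port and path), and the "*" wildcard may only stand in for the leftmost DNS label of the host. "*.company.MyEntryPoint" is a Java package-style path rather than a URL, which is why realm verification fails while the full localhost URL passes. Something along these lines would be the expected shape:

        // sketch: hostnames are placeholders, adjust to the real domain
        AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "http://*.example.com/");
        // the dev URL (http://192.168.0.1:8888/...) has no subdomain to wildcard,
        // so a separate realm such as "http://192.168.0.1:8888/" would be needed there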


  • Why does my Perl CGI script complain about "Can't locate CGI/Simple.pm"?

    - by dexter
    For more information see this example:

        use strict;
        use warnings;

        use CGI::Simple;
        use DBI;

        my $cgi = CGI::Simple->new;

        my $dsn = sprintf(
            'DBI:mysql:database=%s;host=%s',
            'cdcol',
            'localhost'
        );

        my $dbh = DBI->connect($dsn, root => '', { AutoCommit => 0, RaiseError => 0 });
        my $status = $dbh ? 'Connected' : 'Failed to connect';

        print $cgi->header, <<HTML;
        <!DOCTYPE HTML>
        <html>
          <head><title>Test</title></head>
          <body>
            <h1>Perl CGI Script</h1>
            <p>$status</p>
          </body>
        </html>
        HTML

    This code gives me the error:

        The server encountered an internal error and was unable to complete your request.
        Error message:
        Can't locate CGI/Simple.pm in @INC (@INC contains: C:/xampp/perl/site/lib/ C:/xampp/perl/lib C:/xampp/perl/site/lib C:/xampp/apache) at C:/xampp/htdocs/perl/index.pl line 4.
        BEGIN failed--compilation aborted at C:/xampp/htdocs/perl/index.pl line 4.
        Error 500
        localhost
        3/25/2010 11:19:19 AM
        Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1

    What does this mean and how can I resolve it?
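
    The message itself means the CGI::Simple module is not installed anywhere on @INC. Two common fixes, sketched as suggestions (the paths are placeholders): install the module, or point @INC at wherever a copy already lives:

        # from a command prompt, assuming the cpan client that ships with Perl is available:
        cpan CGI::Simple

        # or, inside the script, add the module's location to @INC before the 'use' line:
        use lib 'C:/path/to/perl/libs';   # placeholder path
        use CGI::Simple;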


  • Python Socket Getting Connection Reset

    - by Ian
    I created a threaded socket listener that stores newly accepted connections in a queue. The socket threads then read from the queue and respond. For some reason, when benchmarking with 'ab' (apache benchmark) using a concurrency of 2 or more, I always get a connection reset before it's able to complete the benchmark (this is taking place locally, so there's no external connection issue).

        class server:
            _ip = ''
            _port = 8888

            def __init__(self, ip=None, port=None):
                if ip is not None:
                    self._ip = ip
                if port is not None:
                    self._port = port
                self.server_listener(self._ip, self._port)

            def now(self):
                return time.ctime(time.time())

            def http_responder(self, conn, addr):
                httpobj = http_builder()
                httpobj.header('HTTP/1.1 200 OK')
                httpobj.header('Content-Type: text/html; charset=UTF-8')
                httpobj.header('Connection: close')
                httpobj.body("Everything looks good")
                data = httpobj.generate()
                sent = conn.sendall(data)

            def http_thread(self, id):
                self.log("THREAD %d: Starting Up..." % id)
                while True:
                    conn, addr = self.q.get()
                    ip, port = addr
                    self.log("THREAD %d: responding to request: %s:%s - %s" % (id, ip, port, self.now()))
                    self.http_responder(conn, addr)
                    self.q.task_done()
                    conn.close()

            def server_listener(self, host, port):
                self.q = Queue.Queue(0)
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.bind( (host, port) )
                sock.listen(5)
                for i in xrange(4):  # thread count
                    thread.start_new(self.http_thread, (i+1, ))
                while True:
                    self.q.put(sock.accept())
                sock.close()

        server('', 9999)

    When running the benchmark, I get a totally random number of good requests before it errors out, usually between 4 and 500.

    Edit: Took me a while to figure it out, but the problem was in sock.listen(5). Because I was using apache benchmark with a higher concurrency (5 and up), it was causing the backlog of connections to pile up, at which point the connections started getting dropped by the socket.


  • software distribution and patch management

    - by daemonkid
    How do software houses like Microsoft or anti-virus companies patch/update their software? Anti-virus companies don't send the complete executable; only new virus signatures, I suppose. Similarly, I've noticed Microsoft sends certain files to the '$NtUninstallKB......$' folder that it creates when the Windows Update program runs. I suppose there is an installer in each such folder that replaces only those DLLs that need to be updated or fixed. Questions:

        - Is there a universal method for doing this, or does each house employ their own methods? I don't want to re-send the entire application to each individual client.
        - Suppose only certain DLLs need to be changed, or maybe some more added; how should I go about planning my final compiled application? Do I need to look at separating my application into multiple assemblies? If yes, is there some compilation method that allows packing specific classes into a particular DLL?

    What I have put down here are my thoughts on the subject and I could be wrong. Could anyone throw some light on this please? I am looking at implementing such a deployment and patch management technique for the .NET platform. Thanks for your time.


  • Uploading file with metadata

    - by Dilse Naaz
    Hi, could you help me with how to add a file to a SharePoint document library? I found some articles on the net, but I didn't get the complete concept from them. For now, I upload a file without metadata using this code:

        if (fuDocument.PostedFile != null)
        {
            if (fuDocument.PostedFile.ContentLength > 0)
            {
                Stream fileStream = fuDocument.PostedFile.InputStream;
                byte[] byt = new byte[Convert.ToInt32(fuDocument.PostedFile.ContentLength)];
                fileStream.Read(byt, 0, Convert.ToInt32(fuDocument.PostedFile.ContentLength));
                fileStream.Close();

                using (SPSite site = new SPSite(SPContext.Current.Site.Url))
                {
                    using (SPWeb webcollection = site.OpenWeb())
                    {
                        SPFolder myfolder = webcollection.Folders["My Library"];
                        webcollection.AllowUnsafeUpdates = true;
                        myfolder.Files.Add(System.IO.Path.GetFileName(fuDocument.PostedFile.FileName), byt);
                    }
                }
            }
        }

    This code works fine, but I need to upload the file with metadata. Please help me by editing this code if that is possible. I created 3 columns in my document library.
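
    For reference, the usual pattern - sketched below with placeholder column names, since the post does not say what the three columns are called - is that Files.Add returns an SPFile, whose underlying list item carries the metadata fields:

        // sketch: "Title", "Department", "DocType" are placeholders for the real column names
        SPFile spFile = myfolder.Files.Add(
            System.IO.Path.GetFileName(fuDocument.PostedFile.FileName), byt);

        SPListItem item = spFile.Item;          // the list item behind the uploaded file
        item["Title"] = "Some title";
        item["Department"] = "Finance";
        item["DocType"] = "Invoice";
        item.Update();                          // commits the metadata to the library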


  • How to best show progress info when using ADO.NET?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations - specifically, when inserting or updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or how far along it is. Basically, all I know is when Update returns, and it is then assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, then 100%.

    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate an estimated size per DataRow based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or a mouse waiting cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser's file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?


  • How do I change the spacing between fields in a DataForm?

    - by Simon_Weaver
    How do I change the spacing between fields in a DataForm in Silverlight? I've tried editing the template but cannot find what I need. I thought all I needed to do was change the MinHeight and Margin of the DataField style, but that doesn't seem to do it.

        <Style TargetType="dataFormToolkit:DataField">
            <Setter Property="IsTabStop" Value="False"/>
            <Setter Property="Margin" Value="2"/>
            <Setter Property="MinHeight" Value="5"/>
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="dataFormToolkit:DataField">
                        <ContentControl x:Name="ContentControl"
                                        Foreground="{TemplateBinding Foreground}"
                                        HorizontalContentAlignment="Stretch"
                                        IsTabStop="False"
                                        VerticalAlignment="Center"/>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

    I've found a number of articles about styling the DataForm, but many of them seem to be out of date. I don't see anything in the complete extracted template in Blend that corresponds to spacing.


  • iPad: Cannot load facebook page wall in UIWebview

    - by geekay
    It's already too long that I have been struggling with this issue. After searching a lot, I decided to post a question here.

    What my app does:

        - Captures a photo
        - Uploads the photo to the wall of the page
        - Displays the Facebook page wall in a UIWebView after the upload is complete

    Everything was working as expected 4 days back. :) Suddenly something went wrong. :(

    Code:

        NSString *facebookPageURL = @"https://m.facebook.com/pages/<myPageName>/<myPageID>?v=wall";

        UIWebView *webView = [[UIWebView alloc] initWithFrame:kAppFrame];
        [webView setUserInteractionEnabled:NO];
        [webView setBackgroundColor:[UIColor clearColor]];
        webView.delegate = self;
        [webView setHidden:YES];

        NSURL *url = [NSURL URLWithString:[facebookPageURL stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]
                            relativeToURL:[NSURL URLWithString:@"http://login.facebook.com"]];
        NSMutableURLRequest *request = nil;
        if (url)
            request = [NSMutableURLRequest requestWithURL:url];
        [webView loadRequest:request];
        [self.view addSubview:webView];
        [webView reload];
        [self.view bringSubviewToFront:webView];
        webView = nil;

    Scenario:

        - If I open the facebookPageURL in Safari in the iOS Simulator, it works well
        - If I open the URL in any browser on the Mac, it works well
        - In the webView I see a white screen
        - If I change the facebookPageURL from ?v=wall to ?v=info, I am still able to see the page (not a blank screen, at least)

    Note: My Facebook page is NOT unpublished and is visible. I have cross-checked the Facebook page permissions. I suspect something changed on the Facebook side overnight. Please guide.


  • How do I begin reading source code?

    - by anonnoir
    I understand the value of reading source code, and I am trying my best to read as much as I can. However, every time I try getting into a 'large' (i.e. complete) project of some sort, I am overwhelmed. For example, I use Anki a lot when revising languages. Also, I'm interested in getting to know how an audio player works (because I have some project ideas), hence quodlibet on Google Code. But whenever I open the source code folders for the above programs, there are just so many files that I don't know where or what to begin with. I think that I should start with the files named __init__.py, but I can't see the logical structure of the programs, or what reasoning was applied when the original writer divided his modules the way he did. Hence, my questions:

        - How/where should I begin reading source? Any general tips or ideas?
        - How does a programmer keep in mind the overall structure and logic of the program, especially for large projects, and is it common not to document that structure?
        - As an open source reader, must I look through all of the code and get a bird's eye view of the code and libraries before even being able to proceed?
        - Would an IDE like the Eclipse SDK (with PyDev) help with code reading?

    Thanks for the help; I really appreciate your helping me.


  • CFHost DNS Resolution - When is it OK to use synchronous API?

    - by Jasarien
    I went to the iPhone Developer Tech Talk a few months ago and asked one of the gurus there about the lack of NSHost on the iPhone. Some code I was porting to the iPhone made significant use of NSHost throughout its networking code. I was told that NSHost is on the iPhone, but it's private. I was also told that NSHost is a synchronous API and that I shouldn't use it anyway. (If anyone could elaborate on why it shouldn't be used, as a bonus, that'd be great.) I can see the caveats of using synchronous APIs on the main thread in that they'll block until complete - and that's never a good thing with network code, because there are so many factors that could cause the API to block the thread for a significant amount of time. My solution was to write a wrapper around CFHost's asynchronous resolution functions. The result works quite well, and I'm considering releasing it into the public domain.

    But my question is this: say my app only resolves a hostname once per run, during the connecting phase, and then caches it for the rest of the session. During the time it is resolving, a modal screen is shown telling the user "Connecting", with a nice spinner. Does it really matter whether or not the resolution is asynchronous? The user has to wait to connect anyway, and the resolution is only done on the first connection. Subsequent connections use the cached result of the resolution. When is it OK to be synchronous, and when should things be asynchronous?


  • XSLT multiple string replacement with recursion

    - by John Terry
    I have been attempting to perform multiple (different) string replacements with recursion, and I have hit a roadblock. I have successfully gotten the first replacement to work, but the subsequent replacements never fire. I know this has to do with the recursion and how the with-param string is passed back into the call-template. I see my error and why the next xsl:when never fires, but I just can't seem to figure out exactly how to pass the complete modified string from the first xsl:when to the second xsl:when. Any help is greatly appreciated.

        <xsl:template name="replace">
            <xsl:param name="string" select="." />
            <xsl:choose>
                <xsl:when test="contains($string, '&#13;&#10;')">
                    <xsl:value-of select="substring-before($string, '&#13;&#10;')" />
                    <br/>
                    <xsl:call-template name="replace">
                        <xsl:with-param name="string" select="substring-after($string, '&#13;&#10;')"/>
                    </xsl:call-template>
                </xsl:when>
                <xsl:when test="contains($string, 'TXT')">
                    <xsl:value-of select="substring-before($string, '&#13;TXT')" />
                    <xsl:call-template name="replace">
                        <xsl:with-param name="string" select="substring-after($string, '&#13;')" />
                    </xsl:call-template>
                </xsl:when>
                <xsl:otherwise>
                    <xsl:value-of select="$string"/>
                </xsl:otherwise>
            </xsl:choose>
        </xsl:template>

