Search Results

Search found 7638 results on 306 pages for 'binary tree'.

Page 124/306

  • Check a list of packages to install with apt-get

    - by Joel
    I am writing a post-install script for Ubuntu in Perl (same script as seen here). One of the steps is to install a list of packages. The problem is that if apt-get install fails in some of many different ways for any one of the packages the script dies badly. I would like to prevent that from happening. This happens because of the ways that apt-get install fails for packages that it doesn't like. For example when I try to install a nonsense word (i.e. typed in the wrong package name) $ sudo apt-get install oblihbyvl Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package oblihbyvl but if instead the package name has been obsoleted (installing handbrake from ppa) $ sudo apt-get install handbrake Reading package lists... Done Building dependency tree Reading state information... Done Package handbrake is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'handbrake' has no installation candidate $ apt-cache search handbrake handbrake-cli - versatile DVD ripper and video transcoder - command line handbrake-gtk - versatile DVD ripper and video transcoder - GTK GUI I have tried parsing the results of apt-cache and apt-get -s install to try to catch all possibilities before doing the install, but I seem to keep finding new ways to allow failures to continue to the actual install system command. My question is, is there some facility either in Perl (e.g. a module, though I would like to avoid installing modules if possible as this is supposed to be the first thing run after a new install of Ubuntu) or apt-* or dpkg that would let me be sure that the packages are all available to be installed before installing and if not fail gracefully in some way that lets the user decide what to do?
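
    A minimal sketch of one way to do the pre-flight check, in Python rather than the Perl of the original script (assumes Python 3.7+ for subprocess.run): ask apt-cache policy for each package and treat anything without a real installation candidate as a failure before ever calling apt-get install. The package list below is only an example.

        #!/usr/bin/env python3
        # Sketch: verify every package has an installation candidate before installing.
        import subprocess

        def installable(package):
            """Return True if apt knows an installation candidate for `package`."""
            out = subprocess.run(["apt-cache", "policy", package],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                line = line.strip()
                if line.startswith("Candidate:"):
                    return "(none)" not in line
            return False  # apt-cache printed no policy: unknown package name

        packages = ["vim", "handbrake-cli", "oblihbyvl"]  # example list
        missing = [p for p in packages if not installable(p)]
        if missing:
            print("Not installable, ask the user what to do: " + ", ".join(missing))
        else:
            subprocess.check_call(["sudo", "apt-get", "install", "-y"] + packages)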

    Read the article

  • SQL Server: String Manipulation, Unpivoting

    - by OMG Ponies
    I have a column called body, which contains body content for our CMS. The data looks like: ...{cloak:id=1.1.1}...{cloak}...{cloak:id=1.1.2}...{cloak}...{cloak:id=1.1.3}...{cloak}... A moderately tweaked for readability example: ## h5. A formal process for approving and testing all external network connections and changes to the firewall and router configurations? {toggle-cloak:id=1.1.1}{tree-plus-icon} *Compliance:* {color:red}{*}Partial{*}{color} (?) {cloak:id=1.1.1} || Date: | 2010-03-15 || || Owner: | Brian || || Researched by: | || || Narrative: | Jira tickets are normally used to approve and track network changes\\ || || Artifacts: | Jira.bccampus.ca\\ || || Recommendation: | Need to update policy that no Jira = no change\\ || || Proposed Remedy(ies): | || || Approved Remedy(ies): | || || Date: | || || Reviewed by: | || || Remarks/comments: | || {cloak}## h5. Current network diagrams with all connections to cardholder data, including any wireless networks? {toggle-cloak:id=1.1.2}{tree-plus-icon} *Compliance:* {color:red}{*}TBD{*}{color} (?) {cloak:id=1.1.2} I'd like to get the cloak values out in the following format: requirement_num ----------------- 1.1.1 1.1.2 1.1.3 I'm looking at using UNIONs - does anyone have a better recommendation? Forgot to mention: I can't use regex, because CLR isn't enabled on the database. The numbers aren't sequencial. The current record jumps from 1.1.6 to 1.2.1

    Read the article

  • Python modules, classes, and functions documentation through Sphinx

    - by user343934
    Hi everyone, I am trying to document my small project with Sphinx, which I'm only recently getting familiar with. I read some tutorials and the Sphinx documentation but couldn't make it work. Setup and configuration are OK; I just have problems using Sphinx in a technical way. My table of contents should look like this: Overview (contents), Configuration (contents), System Requirements (contents), How to use (contents), Modules (Index, Display), Help (contents). My focus is on the Modules section, built from docstrings. Details of the modules: directory c:/wamp/www/project/ -- index.py: class HtmlTemplate with def header(), def body(), def form(), def footer(), __init_main; display.py: class MainDisplay with def execute(), def display(), def tree(), __init_main. My documentation directory is c:/users/abc/Desktop/Documentation/doc/ containing _build, _static, _templates, conf.py and index.rst. I have added the Modules directory to the system environment and edited index.rst with the following: Welcome to Seq-alignment's documentation! Contents: .. toctree:: :maxdepth: 2 .. automodule:: index.py .. autoclass:: HtmlTemplate :members: Header,Body,Form,Footer,CloseHtml .. automodule:: display.py .. autoclass:: MainDisplay :members: execute,display,tree Indices and tables :ref:genindex :ref:modindex :ref:search When I build the HTML and view it, I don't get Modules in the table of contents; there is just a "show record" entry, and when I click it I only get the "index.txt" version in another window. I need your suggestions. Thanks
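
    Two details that commonly cause exactly this symptom: automodule takes a module name, not a filename (so ".. automodule:: index", without the .py suffix), and the module's directory must be on sys.path when Sphinx runs. A minimal, hedged conf.py addition (the path is the project directory from the question; adjust as needed):

        # conf.py -- make the project modules importable so autodoc can find them
        import os
        import sys
        sys.path.insert(0, os.path.abspath('c:/wamp/www/project'))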

    Read the article

  • Terminate function on System.in .. possible?

    - by Ronald
    I am currently working on a project where I have to make an agent to interact with a server. Each 50ms, the server will receive the last thing I outputted to System.out and send me a new set of lines as a 'state' through the System.in printstream to analyze and send my next message to System.out. Also, if the server receives multiple outputs from me, it only regards the most recent one. .. As for my question: My program originally constructed a tree and then analyzed each leaf node to see which would be optimal, and then waited around for the next input, but I can recursively do a deeper tree search that would make my output 'better' (and again and again to keep returning a better result). Using this and the fact that if the server receives multiple outputs, it only takes the most recent one, I could do each level, print my result and start the next level. But here comes my problem... I can't be stuck in some complex algorithm while I am supposed to receiving the next input as I will then miss it. So I was wondering if there is a way to cancel anything else I am doing when I receive something via System.in and then go back to the beginning of the function and start the search again with the new set of input (and rinse and repeat..) I hope this all makes sense, Thank ye all
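
    The question is Java, but the underlying pattern is language-independent: run the iterative-deepening search in a worker that checks a flag between depth levels, and let the thread reading System.in set that flag whenever a new state arrives. A rough sketch of the pattern in Python (the names and the search itself are placeholders):

        import sys
        import threading

        class Searcher(threading.Thread):
            """Deepens its search until told that fresh input has arrived."""
            def __init__(self):
                super().__init__(daemon=True)
                self.new_state = threading.Event()
                self.state = None

            def run(self):
                while True:
                    self.new_state.wait()                 # block until fresh input
                    self.new_state.clear()
                    state, depth = self.state, 1
                    while not self.new_state.is_set():    # cancelled between levels
                        print(self.search(state, depth), flush=True)  # best so far
                        depth += 1

            def search(self, state, depth):
                return "move-for-depth-%d" % depth        # placeholder for the real search

        searcher = Searcher()
        searcher.start()
        for line in sys.stdin:             # each new server state restarts the search
            searcher.state = line.strip()
            searcher.new_state.set()

    In Java the same shape works with a worker Thread plus a volatile flag or AtomicBoolean set by the System.in reader; cancellation is cooperative, i.e. the search checks the flag between depth levels rather than being killed mid-computation.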

    Read the article

  • XPath and XML: Multiple namespaces

    - by emragins
    So I have a document that looks like <a xmlns="uri1" xmlns:pre2="uri2"> <b xmlns:pre3="uri3"> <pre3:c> <stuff></stuff> <goes></goes> <here></here> </pre3:c> <pre3:d xmlns="uri4"> <under></under> <the></the> <tree></tree> </pre3:d> </b> </a> I want an xpath expression that will get me <under>. This has a namespaceURI of uri4. Right now my expression looks like: //ns:a/ns:b/pre3:d/pre4:under I have the namespace manager add 'ns' for the default namespace (uri1 in this case) and I have it defined with pre2, pre3, and pre4 for uri2, uri3, and uri4 respectively. I get the error "Expression must evaluate to a node-set." I know that the node exists. I know that everything up until the pre4:under in my xpath works fine as I use it in the rest of the document with no issues. It's the additional pre4:under that causes the error, and I'm not sure why. Any ideas? Thanks.
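
    The path itself looks right: <under> picks up the default namespace uri4 declared on <pre3:d>, so pre4:under is the correct step once pre4 is bound to uri4. As a sanity check of the prefix mapping outside the original environment, a small sketch with Python/lxml (assuming lxml is installed; the URIs are the placeholders from the question):

        from lxml import etree

        xml = b"""
        <a xmlns="uri1" xmlns:pre2="uri2">
          <b xmlns:pre3="uri3">
            <pre3:d xmlns="uri4">
              <under/>
            </pre3:d>
          </b>
        </a>"""

        ns = {                  # every namespace used in the expression needs a prefix
            "ns": "uri1",       # default namespace of <a> and <b>
            "pre3": "uri3",     # prefix declared on <b>, used by <d>
            "pre4": "uri4",     # default namespace in force on <under>
        }
        root = etree.fromstring(xml)
        print(root.xpath("//ns:a/ns:b/pre3:d/pre4:under", namespaces=ns))
        # -> [<Element {uri4}under ...>]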

    Read the article

  • Which event listener fires after doSave() in Symfony?

    - by fesja
    Hi, I've been looking at this event-listeners page http://www.doctrine-project.org/documentation/manual/1_1/pl/event-listeners and I'm not sure which listener I have to use to make a change after the doSave() method in BaseModelForm.class.php. // PlaceForm.class.php protected function doSave ( $con = null ) { ... parent::doSave($con); .... // Only for new forms, insert place into the tree if($this->object->level == null){ $parent = Place::getPlace($this->getValue('parent'), Language::getLang()); ... $node = $this->object->getNode(); $method = ($node->isValidNode() ? 'move' : 'insert') . 'AsFirstChildOf'; $node->$method($parent); //calls $this->object->save internally } return; } What I want to do is build a custom slug from the ancestors' names of that new place. So if I insert "San Francisco", the slug would be "usa-california-san-francisco". public function postXXXXXX($event) { ... $event->getInvoker()->slug = $slug; } The problem is that I'm inserting a new object with no reference to its parent; only after it's saved do I insert it into the tree, so I can't change the slug until then. I think a Transaction listener could work, but I'm sure there is a better way I'm not seeing right now. Thanks!

    Read the article

  • Modifying bundled properties from visitor

    - by ravenspoint
    How should I modify the bundled properties of a vertex from inside a visitor? I would like to use the simple method of sub-scripting the graph, but the graph parameter passed into the visitor is const, so compiler disallows changes. I can store a reference to the graph in the visitor, but this seems weird. /** A visitor which identifies vertices as leafs or trees */ class bfs_vis_leaf_finder:public default_bfs_visitor { public: /** Constructor @param[in] total reference to int variable to store total number of leaves @param[in] g reference to graph ( used to modify bundled properties ) */ bfs_vis_leaf_finder( int& total, graph_t& g ) : myTotal( total ), myGraph( g ) { myTotal = 0; } /** Called when the search finds a new vertex If the vertex has no children, it is a leaf and the total leaf count is incremented */ template <typename Vertex, typename Graph> void discover_vertex( Vertex u, Graph& g) { if( out_edges( u, g ).first == out_edges( u, g ).second ) { myTotal++; //g[u].myLevel = s3d::cV::leaf; myGraph[u].myLevel = s3d::cV::leaf; } else { //g[u].myLevel = s3d::cV::tree; myGraph[u].myLevel = s3d::cV::tree; } } int& myTotal; graph_t& myGraph; };

    Read the article

  • Response as mail attachment

    - by el ninho
    using (MemoryStream stream = new MemoryStream()) { compositeLink.PrintingSystem.ExportToPdf(stream); Response.Clear(); Response.Buffer = false; Response.AppendHeader("Content-Type", "application/pdf"); Response.AppendHeader("Content-Transfer-Encoding", "binary"); Response.AppendHeader("Content-Disposition", "attachment; filename=test.pdf"); Response.BinaryWrite(stream.GetBuffer()); Response.End(); } I got this working fine. Next step is to send this pdf file to mail, as attachment using (MemoryStream stream = new MemoryStream()) { compositeLink.PrintingSystem.ExportToPdf(stream); Response.Clear(); Response.Buffer = false; Response.AppendHeader("Content-Type", "application/pdf"); Response.AppendHeader("Content-Transfer-Encoding", "binary"); Response.AppendHeader("Content-Disposition", "attachment; filename=test.pdf"); Response.BinaryWrite(stream.GetBuffer()); System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage(); message.To.Add("[email protected]"); message.Subject = "Subject"; message.From = new System.Net.Mail.MailAddress("[email protected]"); message.Body = "Body"; message.Attachments.Add(Response.BinaryWrite(stream.GetBuffer())); System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("192.168.100.100"); smtp.Send(message); Response.End(); } I have problem with this line: message.Attachments.Add(Response.BinaryWrite(stream.GetBuffer())); Any help how to get this to work? Thanks

    Read the article

  • MySQL: get a combined result based on a common column between two tables

    - by Zentdayn
    While trying to learn SQL I came across "Learn SQL The Hard Way" and started reading it. Everything was going fine; then, as a way to practice, I thought I would make something like the example given in the book (the example consists of 3 tables, pet, person and person_pet, where the person_pet table 'links' pets to their owners). I made this:

    report table
    +----+--------------+
    | id | content      |
    +----+--------------+
    | 1  | bank robbery |
    | 2  | invalid      |
    | 3  | cat on tree  |
    +----+--------------+

    notes table
    +-----------+---------------+
    | report_id | content       |
    +-----------+---------------+
    | 1         | they had guns |
    | 3         | cat was saved |
    +-----------+---------------+

    wanted result
    +-----------+----------------+---------------+
    | report_id | report_content | report_notes  |
    +-----------+----------------+---------------+
    | 1         | bank robbery   | they had guns |
    | 2         | invalid        | null or ''    |
    | 3         | cat on tree    | cat was saved |
    +-----------+----------------+---------------+

    I tried a few combinations but had no success. My first thought was SELECT report.id, report.content AS report_content, note.content AS note_content FROM report, note WHERE report.id = note.report_id, but this only returns the rows that have a match (it would not return the invalid report). After this I tried adding IF conditions but just made it worse. My question is: is this something I will figure out after getting past basic SQL, or can it be done in a simple way? Anyway, I would appreciate any help; I'm pretty much lost with this. Thank you. EDIT: I have looked into related questions but haven't yet found one that solves my problem. I probably need to look into other statements such as JOIN or something to sort this out.
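
    This is what a LEFT (outer) JOIN is for: it keeps every report row and fills the note columns with NULL where there is no match. The demonstration below uses Python's built-in sqlite3 so it can be tried without a MySQL server; the SELECT itself works unchanged in MySQL (COALESCE turns the missing note into an empty string):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE report (id INTEGER PRIMARY KEY, content TEXT);
            CREATE TABLE notes  (report_id INTEGER, content TEXT);
            INSERT INTO report VALUES (1, 'bank robbery'), (2, 'invalid'), (3, 'cat on tree');
            INSERT INTO notes  VALUES (1, 'they had guns'), (3, 'cat was saved');
        """)

        rows = conn.execute("""
            SELECT r.id                    AS report_id,
                   r.content               AS report_content,
                   COALESCE(n.content, '') AS report_notes
            FROM report r
            LEFT JOIN notes n ON n.report_id = r.id
            ORDER BY r.id
        """).fetchall()

        for row in rows:
            print(row)
        # (1, 'bank robbery', 'they had guns')
        # (2, 'invalid', '')
        # (3, 'cat on tree', 'cat was saved')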

    Read the article

  • WPF bound object update notification

    - by Carlo
    I have a TreeView with a few objects bound to it, let's say something like this: public class House { public List<Room> Rooms { get; set; } public List<Person> People { get; set; } public House() { this.Rooms = new List<Room>(); this.People = new List<Person>(); } public void BuildRoom(string name) { this.Rooms.Add(new Room() { Name = name }); } public void DestroyRoom(string name) { this.Rooms.Remove(new Room() { Name = name }); } public void PersonEnter(string name) { this.People.Add(new Person() { Name = name }); } public void PersonLeave(string name) { this.People.Remove(new Person() { Name = name }); } } public class Room { public string Name { get; set; } } public class Person { public string Name { get; set; } } The TreeView is watching over the House object, whenever a room is built / destroyed or a person enters / leaves, my tree view updates itself to show the new state of the house (I omitted some implementation details for simplicity). What I want is to know the exact moment when this update finishes, so I can do something right there, the thing is that I created an indicator of the selected item, and when something moves, I need to update said indicator's position, that's the reason I need it exactly when the tree view updates. Let me know if you know a solution to this. Also, the code is not perfect (DestroyRoom and PersonLeave), but you get the idea. Thanks!

    Read the article

  • C++, Ifstream opens local file but not file on HTTP Server

    - by fammi
    Hi, I am using ifstream to open a file and then read from it. My program works fine when i give location of the local file on my system. for eg /root/Desktop/abc.xxx works fine But once the location is on the http server the file fails to open. for eg http://192.168.0.10/abc.xxx fails to open. Is there any alternate for ifstream when using a URL address? thanks. part of the code where having problem: bool readTillEof = (endIndex == -1) ? true : false; // Open the file in binary mode and seek to the end to determine file size ifstream file ( fileName.c_str ( ), ios::in|ios::ate|ios::binary ); if ( file.is_open ( ) ) { long size = (long) file.tellg ( ); long numBytesRead; if ( readTillEof ) { numBytesRead = size - startIndex; } else { numBytesRead = endIndex - startIndex + 1; } // Allocate a new buffer ptr to read in the file data BufferSptr buf (new Buffer ( numBytesRead ) ); mpStreamingClientEngine->SetResponseBuffer ( nextRequest, buf ); // Seek to the start index of the byte range // and read the data file.seekg ( startIndex, ios::beg ); file.read ( (char *)buf->GetData(), numBytesRead ); // Pass on the data to the SCE // and signal completion of request mpStreamingClientEngine->HandleDataReceived( nextRequest, numBytesRead); mpStreamingClientEngine->MarkRequestCompleted( nextRequest ); // Close the file file.close ( ); } else { // Report error to the Streaming Client Engine // as unable to open file AHS_ERROR ( ConnectionManager, " Error while opening file \"%s\"\n", fileName.c_str ( ) ); mpStreamingClientEngine->HandleRequestFailed( nextRequest, CONNECTION_FAILED ); } }

    Read the article

  • Could I return a FileStream as a generic interface to a file?

    - by Eric
    I'm writing a class interface that needs to return references to binary files. Typically I would provide a reference to a file as a file path. However, I'm considering storing some of the files (such as a small thumbnail) in a database directly rather than on a file system. In this case I don't want to add the extra step of reading the thumbnail out of the database onto the disc and then returning a path to the file for my program to read. I'd want to stream the image directly out of the database into my program and avoid writing anything to the disc unless the user explicitly wants to save something. Would having my interface return a FileStream or even an Image make sense? Then it would be up to the implementing class to determine if the source of the FileStream or Image is a file on a disc or binary data in a database. public interface MyInterface { string Thumbnail {get;} string Attachment {get;} } vs public interface MyInterface { Image Thumbnail {get;} FileStream Attachment {get;} }

    Read the article

  • JSF2: Re-render all components on page that have a given ID, without absolute paths

    - by tlind
    Is there any way in JSF 2.0/PrimeFaces of re-rendering all components (using the PrimeFaces update="id1 id2..." attribute or the <f:ajax render="..."/> tag) that have got a given ID, regardless of whether they are in the same form that contains the button triggering the AJAX re-render or not? For example, I want my button to re-render all sections on a page that visualize the user's current shopping basket. Right now, I always have to specify the absolute path to the components that I want to get updated, e.g. update=":header:basket :left-sidebar:menu:basket" which is rather impractical if the structure of the page changes (besides, I have not been able to figure out the correct path for one of these components). I already tried to implement a custom EL function like this, which traverses the component tree: update="{utilBean.findAllComponentsMatchingId('basket')}" but at the time that function is evaluated, apparently not the entire component tree has been set up as it doesn't contain the components I am looking for. How can I deal with this? There certainly must be an easy way of doing AJAX-based updates of sections of the page that are not part of the current <h:form>? Thanks!

    Read the article

  • Quickly accessing files in a 'project'

    - by bbbscarter
    Hi all. I'm looking for a way to quickly open files in my project's source tree. What I've been doing so far is adding files to the file-name-cache like so: (file-cache-add-directory-recursively (concat project-root "some/sub/folder") ".*\\.\\(py\\)$") after which I can use anything-for-files to access any file in the source tree with about 4 keystrokes. Unfortunately, this solution started falling over today. I've added another folder to the cache and emacs has started running out of memory. What's weird is that this folder contains less than 25% of files I'm adding, and yet emacs memory use goes up from 20mb to 400mb on adding just this folder. The total number of files is around 2000, so this memory use seems very high. Presumably I'm abusing the file cache. Anyway, what do other people do for this? I like this solution for its simplicity and speed; I've looked at some of the many, many project management packages for emacs and none of them really grabbed me... Thanks in advance! Simon

    Read the article

  • Simulate stochastic bipartite network based on trait values of species - in R

    - by Scott Chamberlain
    I would like to create bipartite networks in R. For example, if you have a data.frame of two types of species (that can only interact across species, not within species), and each species has a trait value (e.g., size of mouth in the predator allows who gets to eat which prey species), how do we simulate a network based on the traits of the species (that is, two species can only interact if their traits overlap in values for instance)? UPDATE: Here is a minimal example of what I am trying to do. 1) create phylogenetic tree; 2) simulate traits on the phylogeny; 3) create networks based on species trait values. # packages install.packages(c("ape","phytools")) library(ape); library(phytools) # Make phylogenetic trees tree_predator <- rcoal(10) tree_prey <- rcoal(10) # Simulate traits on each tree trait_predator <- fastBM(tree_predator) trait_prey <- fastBM(tree_prey) # Create network of predator and prey ## This is the part I can't do yet. I want to create bipartite networks, where ## predator and prey interact based on certain crriteria. For example, predator ## species A and prey species B only interact if their body size ratio is ## greater than X.
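
    The question is set up in R (ape/phytools), but the missing step -- turning two trait vectors into a bipartite interaction matrix -- is easy to sketch. Below is a hedged illustration in Python/NumPy that skips the phylogenetic simulation and just assumes a trait value (say body size) per predator and per prey; the ratio threshold X and the optional stochastic link probability are stand-ins for whatever criterion you actually want:

        import numpy as np

        rng = np.random.default_rng(1)
        n_pred, n_prey = 10, 10

        # Stand-ins for the phylogenetically simulated traits (e.g. body size).
        trait_predator = rng.lognormal(mean=1.0, sigma=0.5, size=n_pred)
        trait_prey     = rng.lognormal(mean=0.0, sigma=0.5, size=n_prey)

        # Deterministic rule: an interaction is possible if predator/prey size ratio > X.
        X = 1.5
        ratio = trait_predator[:, None] / trait_prey[None, :]      # n_pred x n_prey
        possible = ratio > X

        # Optional stochastic layer: realise each possible link with a probability
        # that increases with the ratio, so the network is not purely deterministic.
        p_link = 1.0 / (1.0 + np.exp(-(ratio - X)))
        network = possible & (rng.random((n_pred, n_prey)) < p_link)

        print(network.astype(int))   # rows = predators, columns = prey

    The same two lines (an outer division of the trait vectors, then a threshold) translate directly to R with outer() and a comparison.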

    Read the article

  • lua metatable __lt __le __eq forced boolean conversion of return value

    - by chris g.
    Overloading __eq, __lt, and __le in a metatable always converts the returning value to a boolean. Is there a way to access the actual return value? This would be used in the following little lua script to create an expression tree for an argument usage: print(_.a + _.b - _.c * _.d + _.a) -> prints "(((a+b)-(c*d))+a)" which is perfectly what I would like to have but it doesn't work for print(_.a == _.b) since the return value gets converted to a boolean ps: print should be replaced later with a function processing the expression tree -- snip from lua script -- function binop(op1,op2, event) if op1[event] then return op1[event](op1, op2) end if op2[event] then return op2[event](op1, op2) end return nil end function eq(op1, op2)return binop(op1,op2, "eq") end ... function div(op1, op2)return binop(op1,op2, "div") end function exprObj(tostr) expr = { eq = binExpr("=="), lt = binExpr("<"), le = binExpr("<="), add = binExpr("+"), sub=binExpr("-"), mul = binExpr("*"), div= binExpr("/") } setmetatable(expr, { __eq = eq, __lt = lt, __le = le, __add = add, __sub = sub, __mul = mul, __div = div, __tostring = tostr }) return expr end function binExpr(exprType) function binExprBind(lhs, rhs) return exprObj(function(op) return "(" .. tostring(lhs) .. exprType .. tostring(rhs) .. ")" end) end return binExprBind end function varExpr(obj, name) return exprObj(function() return name end) end _ = {} setmetatable(_, { __index = varExpr }) -- snap -- Modifing the lua vm IS an option, however it would be nice if I could use an official release

    Read the article

  • Image.Save(..) throws a GDI+ exception because the memory stream is closed.

    - by Pure.Krome
    Hi folks, I've got some binary data which I want to save as an image. When I try to save the image, it throws an exception if the memory stream used to create the image was closed before the save. The reason I do this is that I'm dynamically creating images, and as such I need to use a memory stream. This is the code: [TestMethod] public void TestMethod1() { // Grab the binary data. byte[] data = File.ReadAllBytes("Chick.jpg"); // Read in the data but do not close before using the stream. Stream originalBinaryDataStream = new MemoryStream(data); Bitmap image = new Bitmap(originalBinaryDataStream); image.Save(@"c:\test.jpg"); originalBinaryDataStream.Dispose(); // Now let's use a nice dispose, etc... Bitmap image2; using (Stream originalBinaryDataStream2 = new MemoryStream(data)) { image2 = new Bitmap(originalBinaryDataStream2); } image2.Save(@"C:\temp\pewpew.jpg"); // This throws the GDI+ exception. } Does anyone have any suggestions as to how I could save an image with the stream closed? I cannot rely on the developers to remember to close the stream after the image is saved. In fact, the developer would have NO IDEA that the image was generated using a memory stream (because it happens in some other code, elsewhere). I'm really confused :(

    Read the article

  • C++ - Distributing different headers than development

    - by Ben
    I was curious about doing this in C++: Lets say I have a small library that I distribute to my users. I give my clients both the binary and the associated header files that they need. For example, lets assume the following header is used in development: #include <string> ClassA { public: bool setString(const std::string & str); private: std::string str; }; Now for my question. For deployment, is there anything fundamentally wrong with me giving a 'reduced' header to my clients? For example, could I strip off the private section and simply give them this: #include <string> ClassA { public: bool setString(const std::string & str); }; My gut instinct says "yes, this is possible, but there are gotchas", so that is why I am asking this question here. If this is possible and also safe, it looks like a great way to hide private variables, and thus even avoid forward declaration in some cases. I am aware that the symbols will still be there in the binary itself, and that this is just a visibility thing at the source code level. Thanks!

    Read the article

  • How to read/write from Erlang to a named pipe?

    - by cstar
    I need my erlang application to read and write through a named pipe. Opening the named pipe as a file will fail with eisdir. I wrote the following module, but it is fragile and feels wrong in many ways. Also it fails on reading after a while. Is there a way to make it more ... elegant ? -module(port_forwarder). -export([start/2, forwarder/2]). -include("logger.hrl"). start(From, To)-> spawn(fun() -> forwarder(From, To) end). forwarder(FromFile, ToFile) -> To = open_port({spawn,"/bin/cat > " ++ ToFifo}, [binary, out, eof,{packet, 4}]), From = open_port({spawn,"/bin/cat " ++ FromFifo}, [binary, in, eof, {packet, 4}]), forwarder(From, To, nil). forwarder(From, To, Pid) -> receive {Manager, {command, Bin}} -> ?ERROR("Sending : ~p", [Bin]), To ! {self(), {command, Bin}}, forwarder(From, To, Manager); {From ,{data,Data}} -> Pid ! {self(), {data, Data}}, forwarder(From, To, Pid); E -> ?ERROR("Quitting, first message not understood : ~p", [E]) end. As you may have noticed, it's mimicking the port format in what it accepts or returns. I want it to replace a C code that will be reading the other ends of the pipes and being launched from the debugger.

    Read the article

  • Upload a new file: first check whether it already exists in the database, and save it only if it does not

    - by Hala Qaseam
    I'm trying to create sql database that contains Image Id (int) Imagename (varchar(50)) Image (image) and in aspx write in upload button this code: protected void btnUpload_Click(object sender, EventArgs e) { //Condition to check if the file uploaded or not if (fileuploadImage.HasFile) { //getting length of uploaded file int length = fileuploadImage.PostedFile.ContentLength; //create a byte array to store the binary image data byte[] imgbyte = new byte[length]; //store the currently selected file in memeory HttpPostedFile img = fileuploadImage.PostedFile; //set the binary data img.InputStream.Read(imgbyte, 0, length); string imagename = txtImageName.Text; //use the web.config to store the connection string SqlConnection connection = new SqlConnection(strcon); connection.Open(); SqlCommand cmd = new SqlCommand("INSERT INTO Image (ImageName,Image) VALUES (@imagename,@imagedata)", connection); cmd.Parameters.Add("@imagename", SqlDbType.VarChar, 50).Value = imagename; cmd.Parameters.Add("@imagedata", SqlDbType.Image).Value = imgbyte; int count = cmd.ExecuteNonQuery(); connection.Close(); if (count == 1) { BindGridData(); txtImageName.Text = string.Empty; ScriptManager.RegisterStartupScript(this, this.GetType(), "alertmessage", "javascript:alert('" + imagename + " image inserted successfully')", true); } } } When I'm uploading a new image I need to first check if this image already exists in database and if it doesn't exist save that in database. Please how I can do that?

    Read the article

  • Need help debugging a huge chunk of JSON data...

    - by meder
    I have a huge chunk, so large that I can't manually edit the file and need to read it in and do regex operations to see what's wrong. Basically - my server is PHP 5.1.6 and I can't update it. This features an older json_decode which is less featured than the 5.2/5.3 versions. json_decode returns NULL and json_last_error is being invoked but the function doesn't exist except in PHP 5.3 so I'm manually trying to see what's wrong. $regex = '#[^0-9"$a-zA-Z{:}().]#'; $json = preg_replace( $regex, '', $json ); $tree = json_decode ( $json, true ); var_dump($tree); // NULL A snippet of the JSON.. somewhere in the middle {"109":0,"103":1,"102":59,"101":70,"100":4299,"94":0,"50":51,"46":0,"45":0,"44":0,"43":0,"42":0,"23":0,"22":0,"18":0,"17":1,"16":1,"13":160,"8":4298}},"2":{"d":{"109":0,"103":92,"102":54,"101":53,"100":4301,"94":0,"50":4278,"49":328,"46":1,"45":0,"44":1,"43":0,"42":0,"26":0,"23":0,"22":0,"18":0,"17":1,"16":1,"8":4300},"m":{"94":1,"100":1,"26":1,"50":1,"8":1,"49":1,"18":1,"43":1,"42":1,"109":1},"c":{"\/":{"d":{"109":0,"100":4301,"94":0,"50":4278,"49":328,"43":0,"42":0,"26":0,"18":0,"8":4300}},"G":{"d":{"109":1,"100":4303,"94":1,"68":17,"50":64,"49":53,"43":1,"42":1,"34":0,"18":1,"13":2216,"11":0,"8":4302}}}},"3": The }}}} is suspicious but this probably just closes 4 nested object literals. Would appreciate any insight.
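
    One caution: the cleanup regex above strips commas and slashes, which will corrupt otherwise valid JSON by itself. A quicker way to find the broken spot is to feed the raw string to a parser that reports the failure offset and print a window around it -- a sketch below in Python 3 (the filename is a placeholder):

        import json

        with open("data.json", encoding="utf-8") as f:   # placeholder filename
            raw = f.read()

        try:
            json.loads(raw)
            print("JSON is valid")
        except json.JSONDecodeError as err:
            print("%s (line %d, column %d)" % (err.msg, err.lineno, err.colno))
            start = max(err.pos - 60, 0)
            snippet = raw[start:err.pos + 60]
            print(snippet)
            print(" " * (err.pos - start) + "^")   # caret under the failing character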

    Read the article

  • Is there a free tool which can help visualize the logic of a stored procedure in SQL Server 2008 R2?

    - by Hamish Grubijan
    I would like to be able to plot a call graph of a stored procedure. I am not interested in every detail, and I am not concerned with dynamic SQL (although it would be cool to detect it and skip it maybe or mark it as such.) I would like the tool to generate a tree for me, given the server name, db name, stored proc name, a "call tree", which includes: Parent stored procedure. Every other stored procedure that is being called as a child of the caller. Every table that is being modified (updated or deleted from) as a child of the stored proc which does it. Hopefully it is clear what I am after; if not - please do ask. If there is not a tool that can do this, then I would like to try to write one myself. Python 2.6 is my language of choice, and I would like to use standard libraries as much as possible. Any suggestions? EDIT: For the purposes of bounty Warning: SQL syntax is COMPLEX. I need something that can parse all kinds of SQL 2008, even if it looks stupid. No corner cases barred :) EDIT2: I would be OK if all I am missing is graphics.
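
    Not free-tool advice, but a hedged sketch of the roll-your-own route in Python: SQL Server 2008 exposes sys.sql_expression_dependencies, which can be walked recursively to print a proc-calls-proc / proc-references-table tree. The sketch uses pyodbc (not the standard library, but the usual driver), the connection string and root procedure name are placeholders, it does not distinguish reads from writes, and dynamic SQL is invisible to it:

        import pyodbc   # not stdlib, but the usual SQL Server driver for Python

        CONN_STR = ("DRIVER={SQL Server};SERVER=myserver;"
                    "DATABASE=mydb;Trusted_Connection=yes")          # placeholder

        DEPENDS_SQL = """
        SELECT s.name + '.' + o.name AS full_name, o.type
        FROM sys.sql_expression_dependencies d
        JOIN sys.objects o ON o.object_id = d.referenced_id
        JOIN sys.schemas s ON s.schema_id = o.schema_id
        WHERE d.referencing_id = OBJECT_ID(?)
        """

        def print_tree(cursor, proc_name, indent=0, seen=None):
            """Print proc_name, then recurse into the procs and tables it references."""
            seen = seen if seen is not None else set()
            print("  " * indent + proc_name)
            if proc_name in seen:                    # guard against recursive procs
                return
            seen.add(proc_name)
            for name, obj_type in cursor.execute(DEPENDS_SQL, proc_name).fetchall():
                if obj_type.strip() == "P":          # another stored procedure
                    print_tree(cursor, name, indent + 1, seen)
                elif obj_type.strip() == "U":        # user table it touches
                    print("  " * (indent + 1) + "[table] " + name)

        conn = pyodbc.connect(CONN_STR)
        print_tree(conn.cursor(), "dbo.MyTopLevelProc")   # placeholder root proc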

    Read the article

  • Where to start with the development of a first database-driven web app? (long question)

    - by Ryan
    Hi all, I've decided to develop a database-driven web app, but I'm not sure where to start. The end goal of the project is three-fold: 1) to learn new technologies and practices, 2) to deliver an unsolicited demo to management showing how information that the company stores as office documents spread across a cumbersome network folder structure can be consolidated and made easier to access and maintain, and 3) to show my co-workers how Test-Driven Development and prototyping via class diagrams can be very useful and reduce future maintenance headaches. I think this ends up being a basic CMS, for which I have generated a set of features; see below. 1) Create a database to store the site structure (organized as a tree with a 'project group'-project structure). 2) Pull the site structure from the database and display it as a tree using basic front-end technologies. 3) Add administrator privileges/tools for modifying the site structure. 4) Auto-create required sub pages* when an admin adds a new project. 4.1) There will be several sub pages under each project, and the content for each sub page is different. 5) Add user privileges for assigning read and write privileges to sub pages. What I would like to do is use Test-Driven Development and class diagramming as part of my process for developing this project. My problem: I'm not sure where to start. I have read up on unit testing and UML, but never used them in practice. Also, having never worked with databases before, how do I incorporate these items into the models and unit tests? Thank you all in advance for your expertise.
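
    For feature 1 plus the TDD goal, the smallest possible starting point is a single adjacency-list table and one failing test. A hedged sketch using only the Python standard library (sqlite3 and unittest); all names here are illustrative, not a recommended schema:

        import sqlite3
        import unittest

        SCHEMA = """
        CREATE TABLE node (
            id        INTEGER PRIMARY KEY,
            parent_id INTEGER REFERENCES node(id),   -- NULL for the root
            name      TEXT NOT NULL
        );
        """

        def children(conn, parent_id):
            """Return the names of the direct children of parent_id, in name order."""
            return [row[0] for row in conn.execute(
                "SELECT name FROM node WHERE parent_id IS ? ORDER BY name",  # IS handles NULL too
                (parent_id,))]

        class SiteTreeTest(unittest.TestCase):
            def setUp(self):
                self.conn = sqlite3.connect(":memory:")
                self.conn.executescript(SCHEMA)
                self.conn.executemany(
                    "INSERT INTO node (id, parent_id, name) VALUES (?, ?, ?)",
                    [(1, None, "All projects"),
                     (2, 1, "Group A"),
                     (3, 2, "Project X"),
                     (4, 2, "Project Y")])

            def test_root_has_one_group(self):
                self.assertEqual(children(self.conn, 1), ["Group A"])

            def test_group_lists_its_projects(self):
                self.assertEqual(children(self.conn, 2), ["Project X", "Project Y"])

        if __name__ == "__main__":
            unittest.main()

    Growing this test-first -- write a test such as "destroying a project removes its sub pages", watch it fail, then implement -- is the workflow worth demoing to the co-workers.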

    Read the article

  • Why is the length of a function from another shared lib stored in the ELF?

    - by minastaros
    Our project (C++, Linux, gcc, PowerPC) consists of several shared libraries. When releasing a new version of the package, only those libs should change whose source code was actually affected. With "change" I mean absolute binary identity (the checksum over the file is compared. Different checksum - different version according to the policy). (I should mention that the whole project is always built at once, no matter if any code has changed or not per library). Usually this can by achieved by hiding private parts of the included Header files and not changing the public ones. However, there was a case where just a delete was added to the destructor of a class TableManager (in the TableManager.cpp file!) of library libTableManager.so, and yet the binary/checksum of library libB.so (which uses class TableManager ) has changed. TableManager.h: class TableManager { public: TableManager(); ~TableManager(); private: int* myPtr; } TableManager.cpp: TableManager::~TableManager() { doSomeCleanup(); delete myPtr; // this delete has been added } By inspecting libB.so with readelf --all libB.so, looking at the .dynsym section, it turned out that the length of all functions, even the dynamically used ones from other libraries, are stored in libB! It looks like this (length is the 668 in the 3rd column): 527: 00000000 668 FUNC GLOBAL DEFAULT UND _ZN12TableManagerD1Ev So my questions are: Why is the length of a function actually stored in the client lib? Wouldn't a start address be sufficient? Can this be suppressed somehow when compiling/linking of libB.so (kind of "stripping")? We would really like to reduce this degree of dependency...
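
    To audit which imported functions end up with a recorded size in a client library's dynamic symbol table, the check can be scripted around the same readelf output shown above. A small hedged sketch (Python 3.7+; the library path is a placeholder):

        import subprocess

        LIB = "libB.so"   # placeholder path to the client library

        # `readelf -W --dyn-syms` prints: Num: Value Size Type Bind Vis Ndx Name
        out = subprocess.check_output(["readelf", "-W", "--dyn-syms", LIB], text=True)

        for line in out.splitlines():
            parts = line.split()
            if len(parts) >= 8 and parts[3] == "FUNC" and parts[6] == "UND":
                size = int(parts[2], 0)          # handles decimal or 0x-prefixed sizes
                if size:
                    print("%6d  %s" % (size, parts[7]))   # undefined func with a size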

    Read the article

  • How does git fetch commits associated with a file?

    - by liadan
    I'm writing a simple parser of .git/* files. I have covered almost everything, like objects, refs, pack files, etc., but I have a problem. Let's say I have a big 300M repository (in a pack file) and I want to find all the commits which changed /some/deep/inside/file. What I'm doing now is:
    1. Fetch the last commit.
    2. Find the file in it by fetching the parent tree, finding the tree inside, and repeating recursively until I get to the file. Additionally I check the hash of each subfolder on the way to the file; if one of them is the same as in the commit before, I assume the file was not changed (because its parent dir didn't change).
    3. Store the hash of the file and fetch the parent commit.
    4. Find the file again and check whether its hash changed; if yes, the original commit (i.e. the one before the parent) changed the file.
    5. Repeat over and over until I reach the very first commit.
    This solution works, but it sucks. In the worst-case scenario, the first search can take as long as 3 minutes (for a 300M pack). Is there any way to speed it up? I tried to avoid putting such large objects in memory, but right now I don't see any other way, and even then the initial memory load will take forever :( Greets, and thanks for any help!
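
    If shelling out to the git binary is an acceptable shortcut while the hand-written pack parser matures, git already does this walk with path limiting, and it is fast even on large packs. A minimal sketch (the repository location is a placeholder; the path is the one from the question):

        import subprocess

        REPO = "/path/to/repo"                    # placeholder
        PATH = "some/deep/inside/file"            # path from the question

        # One commit hash per line, newest first, only commits that changed PATH.
        out = subprocess.check_output(
            ["git", "-C", REPO, "log", "--format=%H", "--", PATH], text=True)

        commits = out.split()
        print("%d commits touched %s" % (len(commits), PATH))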

    Read the article
