Search Results

Search found 7638 results on 306 pages for 'binary tree'.

Page 105 of 306

  • Tarballing without git metadata

    - by zaf
    My source tree contains several directories which are using git source control and I need to tarball the whole tree excluding any references to the git metadata or custom log files. I thought I'd have a go using a combo of find/egrep/xargs/tar but somehow the tar file contains the .git directories and the *.log files. This is what I have: find -type f . | egrep -v '\.git|\.log' | xargs tar rvf ~/app.tar Can someone explain my misunderstanding here? Why is tar processing the files that find and egrep are filtering? I'm open to other techniques as well.
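    One alternative, since the asker is open to other techniques: a minimal sketch using Python's standard tarfile module, whose add() filter callback can drop entries before they are written. The exclusion rules below mirror the .git/.log patterns from the question; the archive path is an assumption.

        import os
        import tarfile

        def exclude_git_and_logs(tarinfo):
            # Returning None skips the entry; directories skipped here are not descended into.
            if ".git" in tarinfo.name.split("/") or tarinfo.name.endswith(".log"):
                return None
            return tarinfo

        with tarfile.open(os.path.expanduser("~/app.tar"), "w") as tar:
            tar.add(".", filter=exclude_git_and_logs)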

    Read the article

  • IE page redirect hanging

    - by 08Hawkeye
    My app does a POST to my local server to create a new DOM element, then comes back and should redirect to the same page with the new element. The problem is that when it gets back from the server, the app hangs for almost 2 minutes before doing the redirect. I've isolated the issue to the fact that IE seems to have trouble with my tree structure of 100+ DOM elements, and I can see in HTTPWatch that it sits in a "Blocked" call for the 2 minutes before doing the redirect. Our temporary workaround is to set the innerHTML of the tree structure to an empty string before submitting, thus eliminating the heavy DOM lifting, but we shouldn't need to do this (Firefox has no trouble with the redirect). Question 1: Is there a better fix for this issue? Question 2: Why does ANY page care about the content before a redirect if it's going to be refreshed anyway? Thanks yall //sw

    Read the article

  • Storing website hierarchy in SQL Server 2008

    - by Mika Kolari
    I want to store a website's page hierarchy in a table. What I would like to achieve, efficiently: 1) resolve the (last valid) item by path (e.g. "/blogs/programming/tags/asp.net,sql-server", "/blogs/programming/hello-world"), 2) get ancestor items for a breadcrumb, 3) edit an item without updating the whole tree of children, grandchildren etc. Because of the 3rd point I thought the table could look like:

        ITEM
        id  type       slug         title             parentId
        1   area       blogs        Blogs
        2   blog       programming  Programming blog  1
        3   tagsearch  tags                           2
        4   post       hello-world  Hello World!      2

    Could I use SQL Server's hierarchyid type somehow (especially for point 1, where "/blogs/programming/tags" is the last valid item)? Tree depth would usually be around 3-4. What would be the best way to achieve all this?
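    For point 1, the "resolve to the last valid item" walk is easy to prototype outside the database; a minimal Python sketch against an in-memory copy of the ITEM table above (the dictionary layout and function name are illustrative assumptions, not a proposed schema):

        # Hypothetical in-memory mirror of the ITEM table, keyed by (parent_id, slug).
        items_by_parent_and_slug = {
            (None, "blogs"): {"id": 1, "type": "area"},
            (1, "programming"): {"id": 2, "type": "blog"},
            (2, "tags"): {"id": 3, "type": "tagsearch"},
            (2, "hello-world"): {"id": 4, "type": "post"},
        }

        def resolve(path):
            """Walk the slugs left to right and return the last item that matches."""
            current = None
            parent_id = None
            for slug in path.strip("/").split("/"):
                item = items_by_parent_and_slug.get((parent_id, slug))
                if item is None:
                    break  # "/blogs/programming/tags/asp.net,sql-server" stops at "tags"
                current, parent_id = item, item["id"]
            return current

        print(resolve("/blogs/programming/tags/asp.net,sql-server"))  # -> the "tagsearch" row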

    Read the article

  • GDB question - how do I go through disassembled code line by line?

    - by user324994
    I'd like to go through a binary file my teacher gave me line by line to check addresses on the stack and the contents of different registers, but I'm not extremely familiar with using gdb. Although I have the C code, we're supposed to work entirely from the binary file. Here are the commands I've used so far:

        (gdb) file SomeCode

    which gives me this message:

        Reading symbols from ../overflow/SomeCode ...(no debugging symbols found)...done.

    Then I use:

        (gdb) disas main

    which gives me all of the assembly. I wanted to set up a breakpoint and use the "next" command, but none of the commands I tried work. Does anyone know the syntax I would use?

    Read the article

  • Placing a library part using the Revit API

    - by ADAM
    I am using the Revit API to import a family symbol. The code below works; however, it only loads the family into Revit, and then you have to manually drag it from the Families tree or insert it using the relevant family tool.

        Document document = commandData.Application.ActiveDocument;
        document.LoadFamilySymbol(fileName, name, out gotSymbol);

    How do I get to the point where it asks the user where they want it placed (similar to when you click "load into project" while editing a family), so they don't have to drag it from the Families tree?

    Read the article

  • How to find NSOutlineView row index when using NSTreeController

    - by velocityb0y
    I'm using an NSTreeController to manage nodes for an NSOutlineView. When the user adds a new item, I create a new object and insert it:

        EntityViewEntityNode *newNode = [EntityViewEntityNode nodeWithName:@"New entity" entity:newObject];

        // Insert at end of group
        NSIndexPath *insertAt = [pathOfGroupNode indexPathByAddingIndex:[selected.children count]];
        [entityCollectionTreeController insertObject:newNode atArrangedObjectIndexPath:insertAt];

    Now I'd like to open the table column for editing so the user can name the new item. This seems logical:

        NSInteger row = [entityCollectionOutlineView rowForItem:newNode];
        [entityCollectionOutlineView editColumn:0 row:row withEvent:nil select:YES];

    However, row is always -1, indicating the object isn't found. Poking around reveals that the tree controller is not actually putting my objects directly in the tree, but is wrapping them in a node object of its own. Does anyone have insight into how I would go about getting a row index relative to the outline view, so I can do this (without, hopefully, enumerating everything in the outline view and figuring out the mapping back to my node)?

    Read the article

  • Binding dropdown list in a gridview edit item template

    - by Renju
    I can't bind the DropDownList in the edit item template of my GridView; the DropDownList comes back null.

        protected void grdDevelopment_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            DropDownList drpBuildServers = new DropDownList();
            if (grdDevelopment.EditIndex == e.Row.RowIndex)
            {
                drpBuildServers = (DropDownList)e.Row.Cells[0].FindControl("ddlBuildServers");
            }
        }

    I am also getting an error: "Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request."

    Read the article

  • XML: When to use attributes instead of child nodes?

    - by Rosarch
    For tree leaves in XML, when is it better to use attributes, and when is it better to use descendant nodes? For example, in the following XML document:

        <?xml version="1.0" encoding="utf-8" ?>
        <savedGame>
          <links>
            <link rootTagName="zombies" packageName="zombie" />
            <link rootTagName="ghosts" packageName="ghost" />
            <link rootTagName="players" packageName="player" />
            <link rootTagName="trees" packageName="tree" />
          </links>
          <locations>
            <zombies>
              <zombie>
                <positionX>41</positionX>
                <positionY>100</positionY>
              </zombie>
              <zombie>
                <positionX>55</positionX>
                <positionY>56</positionY>
              </zombie>
            </zombies>
            <ghosts>
              <ghost>
                <positionX>11</positionX>
                <positionY>90</positionY>
              </ghost>
            </ghosts>
          </locations>
        </savedGame>

    The <link> tag has attributes, but it could also be written as:

        <link>
          <rootTagName>trees</rootTagName>
          <packageName>tree</packageName>
        </link>

    Similarly, the location tags could be written as:

        <zombie positionX="55" positionY="56" />

    instead of:

        <zombie>
          <positionX>55</positionX>
          <positionY>56</positionY>
        </zombie>

    What reasons are there to prefer one over the other? Is it just a stylistic issue? Any performance considerations?
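    Whichever form is chosen, most XML APIs read the two with equally little code, so the choice tends to come down to style and to how likely the value is to grow internal structure later. A small Python sketch of the two access patterns, using the zombie example above:

        import xml.etree.ElementTree as ET

        attr_form  = ET.fromstring('<zombie positionX="55" positionY="56" />')
        child_form = ET.fromstring('<zombie><positionX>55</positionX><positionY>56</positionY></zombie>')

        x_from_attr  = attr_form.get("positionX")        # attribute access
        x_from_child = child_form.findtext("positionX")  # child-element access
        assert x_from_attr == x_from_child == "55"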

    Read the article

  • glibc backtrace - can't redirect output to file

    - by Jason Antman
    Hi, I'm in the process of debugging a C program (that I didn't write). I have all of the internal debugging tools (a whole bunch of printf's) enabled, and I wrote a small PHP script that uses proc_open() and just grabs both stdout and stderr, and time-coordinates them in one file. At the moment, the binary is dying with a realloc() error that's caught by glibc, and a glibc backtrace is printed, beginning with:

        *** glibc detected *** /sbin/rsyslogd: realloc(): invalid next size: 0x00002ace626ac910 ***

    Here's the thing I don't understand: I've confirmed that the PHP script is catching both stdout and stderr from the binary's process and writing them to the correct files, but this backtrace is still printed to the console. Where is this coming from? Is there some magical output channel other than stdout and stderr? Any ideas on how I go about capturing this backtrace to a file, or sending it out with stderr? Thanks, Jason
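    One detail that may explain the behaviour: glibc's fatal error messages (the "*** glibc detected ***" block) are typically written straight to the controlling terminal rather than to stderr, which is why redirecting stdout/stderr misses them; setting the LIBC_FATAL_STDERR_ environment variable asks glibc to use stderr instead. A sketch of the same capture the asker does in PHP, here in Python, with the rsyslogd invocation purely illustrative:

        import os
        import subprocess

        env = dict(os.environ)
        env["LIBC_FATAL_STDERR_"] = "1"   # send glibc fatal errors to stderr instead of the tty

        with open("combined.log", "w") as log:
            # Merge stderr into stdout so the backtrace lands in the same file as the debug output.
            subprocess.call(["/sbin/rsyslogd"], stdout=log, stderr=subprocess.STDOUT, env=env)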

    Read the article

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access op

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump, if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine. On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the map.getInt / getShort methods, i.e. a read operation on the map. The uncontroversial (?) code that sets up the map is this:

        /** Set up the map from the given filename and position */
        protected void open() throws IOException {
            // Set up buffer, is this all the flexibility we'll need?
            channel = new FileInputStream(file).getChannel();
            MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes?
            map = map1;
            // assumes the host writing the files is little-endian (x86), ought to be configurable
            map.order(java.nio.ByteOrder.LITTLE_ENDIAN);
            map.position(position);
        }

    and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map. I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running Java 1.6.0_20-b02, though the production host is running Debian/lenny and the dev host Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM and are running with the same Java heap settings. I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515, which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM; this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be a coincidence that that was the longest it had gone before crashing! If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)

    Read the article

  • Best way to fingerprint and verify HTML structure

    - by Lukas Šalkauskas
    Hello there, I just want to know your opinion about how to fingerprint and verify an HTML page's link structure. The problem I want to solve: fingerprint, say, 10 different sites' HTML pages, and after some time be able to verify them; that is, if a site has changed or its links have changed, verification fails, otherwise verification succeeds. My basic idea is to analyze the link structure by splitting it up in some way, building some kind of tree, and generating some kind of code from that tree. But I'm still at the brainstorming stage, where I need to discuss this with someone and hear other ideas. So any ideas, algorithms, and suggestions would be useful.
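    A minimal sketch of one possible approach using only Python's standard library: walk the page with an HTML parser, record each link together with its nesting depth so the fingerprint reflects structure rather than just the set of URLs, and hash the canonical result. All class and function names here are illustrative:

        import hashlib
        from html.parser import HTMLParser

        class LinkCollector(HTMLParser):
            """Collect (depth, tag, href) triples so the digest captures where links sit, not just which links exist."""
            def __init__(self):
                super().__init__()
                self.depth = 0
                self.triples = []

            def handle_starttag(self, tag, attrs):
                self.depth += 1
                if tag == "a":
                    self.triples.append((self.depth, tag, dict(attrs).get("href", "")))

            def handle_endtag(self, tag):
                self.depth -= 1

        def fingerprint(html_text):
            collector = LinkCollector()
            collector.feed(html_text)
            canonical = "\n".join("%d|%s|%s" % t for t in collector.triples)
            return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

        # Later: recompute the digest and compare it with the stored one to verify the page.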

    Read the article

  • Best similarity metric for collaborative filtering?

    - by allclaws
    I'm trying to decide on the best similarity metric for a product recommendation system using item-based collaborative filtering. This is a shopping basket scenario where ratings are binary valued: the user has either purchased an item or not; there is no explicit rating system (e.g., 5 stars). Step 1 is to compute item-to-item similarity, though I want to look at incorporating more features later on. Is the Tanimoto coefficient the best way to go for binary values? Or are there other metrics that are appropriate here? Thanks.
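    For purely binary data the Tanimoto coefficient reduces to the Jaccard index over the sets of users who bought each item, which makes it a reasonable default here; a minimal sketch:

        def tanimoto(purchases_a, purchases_b):
            """Tanimoto/Jaccard similarity for binary (bought / not bought) data:
            |A intersect B| / |A union B|, from 0.0 (no overlap) to 1.0 (identical)."""
            a, b = set(purchases_a), set(purchases_b)
            union = len(a | b)
            return len(a & b) / union if union else 0.0

        # Users who bought item 1 vs. users who bought item 2, as sets of user ids:
        print(tanimoto({1, 2, 3, 5}, {2, 3, 7}))   # 2 shared buyers out of 5 distinct -> 0.4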

    Read the article

  • how to run phantomjs on heroku?

    - by mathieurip
    I am trying to run PhantomJS on the Heroku Cedar stack. I am using a PhantomJS buildpack for Heroku, https://github.com/stomita/heroku-buildpack-phantomjs. I followed the instructions but still cannot make it work. When I run the command heroku run bash and type phantomjs --version, it says phantomjs: command not found. I read things about LD_LIBRARY_PATH needing to be set to "/usr/local/lib:/usr/lib:/lib:/app/vendor/phantomjs/lib", which is what I did, but without success. Is there something that I am missing? Where does the buildpack install the phantomjs binary exactly? Is there a way to know the path where the binary is? I am using Ruby 1.9.2. Thanks a lot for your help. EDIT: To be more precise, I want to combine Ruby and PhantomJS, so I am using this custom buildpack: https://github.com/ddollar/heroku-buildpack-multi, but when I push to Heroku I get "Heroku push rejected, failed to compile Multipack app"

    Read the article

  • C++ Unlocking a std::mutex before calling std::unique_lock wait

    - by Sant Kadog
    I have a multithreaded application (using std::thread) with a manager (class Tree) that executes some piece of code on different subtrees (embedded struct SubTree) in parallel. The basic idea is that each instance of SubTree has a deque that stores objects. If the deque is empty, the thread waits until a new element is inserted in the deque or the termination criterion is reached. One subtree can generate objects and push them into the deque of another subtree. For convenience, all my std::mutex, lock and std::condition_variable objects are stored in a struct called "locks". The class Tree creates some threads that run the following method (first attempt):

        void Tree::launch(SubTree & st, Locks & locks)
        {
            /* some code */

            std::lock_guard<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st

            if (st.deque_.empty()) // check that the deque is still empty
            {
                // some threads are still running, wait for them to terminate
                std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ;
                locks.restart_condition_[st.id_].wait(wait_lock) ;
            }

            /* some code */
        }

    The problem is that "deque_lock" is still locked while the thread is waiting. Hence no object can be added to the deque of the current thread by a concurrent one. So I turned the lock_guard into a unique_lock and managed the lock/unlock manually:

        void launch(SubTree & st, Locks & locks)
        {
            /* some code */

            std::unique_lock<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st

            if (st.deque_.empty()) // check that the deque is still empty
            {
                deque_lock.unlock() ; // unlock the access to the deque to enable the other threads to add objects

                // DATA RACE : nothing must happen to the unprotected deque here !!!!!!

                // some threads are still running, wait for them to terminate
                std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ;
                locks.restart_condition_[st.id_].wait(wait_lock) ;
            }

            /* some code */
        }

    The problem now is that there is a data race, and I would like to make sure that the "wait" instruction is performed directly after the "deque_lock.unlock()" one. Would anyone know a way to create such a critical instruction sequence with the standard library? Thanks in advance.
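    For what it's worth, the usual shape of this pattern is to let the condition variable wait on the same lock that guards the deque; the wait call then releases that lock atomically and re-acquires it before returning, so no separate unlock is needed and no unprotected window exists (std::condition_variable::wait on a std::unique_lock behaves this way). A Python sketch of that shape, offered only as an illustration of the structure rather than a drop-in C++ fix:

        import threading
        from collections import deque

        class SubTree:
            def __init__(self):
                self.deque = deque()
                self.lock = threading.Lock()
                # The condition shares the deque's lock: wait() releases it atomically
                # and re-acquires it before returning.
                self.not_empty = threading.Condition(self.lock)

            def push(self, obj):
                with self.lock:
                    self.deque.append(obj)
                    self.not_empty.notify()

            def pop_or_wait(self):
                with self.lock:
                    while not self.deque:          # loop guards against spurious wakeups
                        self.not_empty.wait()
                    return self.deque.popleft()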

    Read the article

  • How to set up linux watchdog daemon with Intel 6300esb

    - by ACiD GRiM
    I've been searching for this on Google for some time now and I have yet to find proper documentation on how to connect the kernel driver for my 6300ESB watchdog timer to /dev/watchdog and ensure that the watchdog daemon is keeping it alive. I am using RHEL-compatible Scientific Linux 6.3 in a KVM virtual machine, by the way. Below is everything I've tried so far:

        dmesg|grep 6300
        i6300ESB timer: Intel 6300ESB WatchDog Timer Driver v0.04
        i6300ESB timer: initialized (0xffffc900008b8000). heartbeat=30 sec (nowayout=0)

        ll /dev/watchdog
        crw-rw----. 1 root root 10, 130 Sep 22 22:25 /dev/watchdog

    /etc/watchdog.conf:

        #ping = 172.31.14.1
        #ping = 172.26.1.255
        #interface = eth0
        file = /var/log/messages
        #change = 1407

        # Uncomment to enable test. Setting one of these values to '0' disables it.
        # These values will hopefully never reboot your machine during normal use
        # (if your machine is really hung, the loadavg will go much higher than 25)
        max-load-1 = 24
        max-load-5 = 18
        max-load-15 = 12

        # Note that this is the number of pages!
        # To get the real size, check how large the pagesize is on your machine.
        #min-memory = 1

        #repair-binary = /usr/sbin/repair
        #test-binary =
        #test-timeout =

        watchdog-device = /dev/watchdog

        # Defaults compiled into the binary
        #temperature-device =
        #max-temperature = 120

        # Defaults compiled into the binary
        #admin = root
        interval = 10
        #logtick = 1

        # This greatly decreases the chance that watchdog won't be scheduled before
        # your machine is really loaded
        realtime = yes
        priority = 1

        # Check if syslogd is still running by enabling the following line
        #pidfile = /var/run/syslogd.pid

    Now maybe I'm not testing it correctly, but I would expect that stopping the watchdog service would cause /dev/watchdog to time out after 30 seconds and I should see the host reboot; however, this does not happen. Also, here is my config for the KVM VM:

        <!--
        WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
        OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
          virsh edit sl6template
        or other application using the libvirt API.
        -->
        <domain type='kvm'>
          <name>sl6template</name>
          <uuid>960d0ac2-2e6a-5efa-87a3-6bb779e15b6a</uuid>
          <memory unit='KiB'>262144</memory>
          <currentMemory unit='KiB'>262144</currentMemory>
          <vcpu placement='static'>1</vcpu>
          <os>
            <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <cpu mode='custom' match='exact'>
            <model fallback='allow'>Westmere</model>
            <vendor>Intel</vendor>
            <feature policy='require' name='tm2'/>
            <feature policy='require' name='est'/>
            <feature policy='require' name='vmx'/>
            <feature policy='require' name='ds'/>
            <feature policy='require' name='smx'/>
            <feature policy='require' name='ss'/>
            <feature policy='require' name='vme'/>
            <feature policy='require' name='dtes64'/>
            <feature policy='require' name='rdtscp'/>
            <feature policy='require' name='ht'/>
            <feature policy='require' name='dca'/>
            <feature policy='require' name='pbe'/>
            <feature policy='require' name='tm'/>
            <feature policy='require' name='pdcm'/>
            <feature policy='require' name='pdpe1gb'/>
            <feature policy='require' name='ds_cpl'/>
            <feature policy='require' name='pclmuldq'/>
            <feature policy='require' name='xtpr'/>
            <feature policy='require' name='acpi'/>
            <feature policy='require' name='monitor'/>
            <feature policy='require' name='aes'/>
          </cpu>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/libexec/qemu-kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/mnt/data/vms/sl6template.img'/>
              <target dev='vda' bus='virtio'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </disk>
            <controller type='usb' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:44:57:f6'/>
              <source bridge='br0.2'/>
              <model type='virtio'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <interface type='bridge'>
              <mac address='52:54:00:88:0f:42'/>
              <source bridge='br1'/>
              <model type='virtio'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <watchdog model='i6300esb' action='reset'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
            </watchdog>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    Any help is appreciated, as the most I've found are patches to KVM and general softdog documentation or IPMI watchdog answers.

    Read the article

  • Difference between halo and mx namespace

    - by Andree
    Hi there! As far as I know, support for the library://ns.adobe.com/flex/halo namespace has been dropped, and now we have to use library://ns.adobe.com/flex/mx instead (reference). Can someone explain whether there's any difference between the two namespaces? I am just starting to learn Flex and this change confuses me. For example, if I have an <mx:Tree> tag in my MXML document, the compiler complains that <mx:Tree> could not be resolved to a component implementation. But if I change my mx namespace to use the old one (halo) instead, it compiles without error. Thanks. Andree Updated: By the way, I use the Flex SDK command-line compiler on Windows; mxmlc --version reports Version 4.0.0 build 10485

    Read the article

  • [wxWidgets] How to store wxImage into database, using C++?

    - by Thomas Matthews
    I have some wxImages and I would like to store them in a BLOB (Binary Large OBject) field in a MySQL database. There are no methods in wxImage or wxBitmap for obtaining the binary data as an array of unsigned char that I could load into the database. My current workaround is to write the image to a temporary file, then load the BLOB field directly from the file. Is there a more efficient method to load and store a wxImage object into a MySQL BLOB field? I am using the MySQL C++ connector 1.05, MS Visual Studio 2008, wxWidgets and C++.

    Read the article

  • How to structure class to support imported 3d model ?

    - by brainydexter
    Hello, I've written a C++ library that reads in this 3D model file format (COLLADA DAE). Up until now, I would output a list of triangles and handle each one at the rendering stage. But now I need to attach some bounding-sphere information to the imported model, and I need some advice on how to organize this in code. Here are some specs of the 3D file format:

    - the 3D model is represented as a tree consisting of nodes
    - each node can contain other nodes, geometry information, transformations etc.

    My requirements:

    - a bounding sphere associated with each node, thereby yielding a bounding sphere hierarchy for the model itself
    - the actual vertex information

    What would be the recommended way to deal with this situation? Thanks
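    One way to organize it is to mirror the DAE node tree with a node class of your own that owns the geometry, the children and a sphere computed bottom-up. A rough Python sketch of that structure (the centroid-based sphere is a loose approximation rather than a minimal enclosing sphere, and transforms are ignored):

        import math

        class Node:
            """One imported node: children, its own vertices (if any), and a bounding sphere."""
            def __init__(self, vertices=None, children=None):
                self.vertices = vertices or []   # [(x, y, z), ...], assumed already in world space
                self.children = children or []
                self.center = (0.0, 0.0, 0.0)
                self.radius = 0.0

            def update_bounds(self):
                # Children first, then gather this node's vertices plus the child sphere centers.
                points = list(self.vertices)
                for child in self.children:
                    child.update_bounds()
                    points.append(child.center)
                if not points:
                    return
                self.center = tuple(sum(p[i] for p in points) / len(points) for i in range(3))
                # Radius: farthest own vertex, or farthest child sphere surface.
                r = max((math.dist(self.center, p) for p in self.vertices), default=0.0)
                for child in self.children:
                    r = max(r, math.dist(self.center, child.center) + child.radius)
                self.radius = r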

    Read the article

  • String of KML needs to be converted to java objects

    - by spartikus
    I have a string of KML coming in on a request object. I have used xjc to create the KML Java objects. I am looking for an easy way to create the nested KML Java objects from this string. I could parse the string and create each object in the tree by hand, but wouldn't it be cool if there were a library or something that would create the Java objects for me? Something like KmlType type = parseKML(mykmlStringFromTheRequest); Then type would be a tree of KML objects. Thanks for the help all.

    Read the article

  • python lxml problem

    - by David ???
    I'm trying to print/save a certain element's HTML from a web page. I've retrieved the requested element's XPath from Firebug. All I wish is to save this element to a file, but I don't seem to succeed in doing so (I tried the XPath with and without /text() at the end). I would appreciate any help, or past experience. 10x, David

        import urllib2, StringIO
        from lxml import etree

        url = 'http://www.tutiempo.net/en/Climate/Londres_Heathrow_Airport/12-2009/37720.htm'
        seite = urllib2.urlopen(url)
        html = seite.read()
        seite.close()

        parser = etree.HTMLParser()
        tree = etree.parse(StringIO.StringIO(html), parser)

        xpath = "/html/body/table/tbody/tr/td[2]/div/table/tbody/tr[6]/td/table/tbody/tr/td[3]/table/tbody/tr[3]/td/table/tbody/tr/td/table/tbody/tr/td/table/tbody/text()"
        elem = tree.xpath(xpath)
        print elem[0].strip().encode("utf-8")
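    One frequent cause of an empty result with Firebug-derived paths, offered as a hypothesis rather than a confirmed diagnosis: Firebug reports the path against the DOM the browser builds, which inserts <tbody> elements, while the raw HTML that lxml parses often has none, so the absolute path matches nothing. A quick way to test that, continuing from the code above:

        # Drop the tbody steps; drop the trailing /text() too so an element comes back
        # and its full HTML can be serialized to a file.
        fixed = xpath.replace("/tbody", "").replace("/text()", "")
        elem = tree.xpath(fixed)
        if elem:
            open("element.html", "w").write(etree.tostring(elem[0], pretty_print=True))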

    Read the article

  • Why does IE prompt a security warning when viewing an XML file?

    - by Tav
    Opening an XML file in Internet Explorer gives a security warning. IE has a nice collapsible tree view for viewing XML, but it's disabled by default and you get this scary error message about a potential security hole. http://www.leonmeijer.nl/archive/2008/04/27/106.aspx But why? How can simply viewing an XML file (not running any embedded macros in it or anything) possibly be a security hole? Sure, I get that running XSLT could potentially do some bad stuff, but we're not talking about executing anything. We're talking about viewing. Why can't IE simply display the XML file as text (plus the collapsible tree viewer)? So why did they label this as a security hole? Can someone describe how simply viewing an XML document could be used in an attack?

    Read the article

  • An Efficient data structure for Sorted List

    - by holydiver
    I want to store my objects sorted by a key taken from the objects' attributes. Later on I'll access these objects sequentially from max key to min key. I'll do some search tasks as well. I'm considering using either an AVL tree or a red-black tree. As far as I know they are nearly equivalent in theory (both are O(log n)), but in practice which one might perform better in my situation? And is there a better alternative than those, considering that I'll mostly do inserts and sequential access to the data structure?
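    If the workload really is mostly inserts plus a max-to-min sweep, a plain sorted array can also be worth benchmarking against either tree, since sequential access is then just a reverse scan over contiguous memory. A small Python sketch of that idea (SortedStore and the obj.key attribute are illustrative names, not an existing library):

        import bisect

        class SortedStore:
            """Keep objects ordered by key; iterate max-to-min; binary-search lookups."""
            def __init__(self):
                self._keys = []
                self._objects = []

            def insert(self, obj):
                i = bisect.bisect_left(self._keys, obj.key)
                self._keys.insert(i, obj.key)      # O(log n) search, O(n) shift on insert
                self._objects.insert(i, obj)

            def find(self, key):
                i = bisect.bisect_left(self._keys, key)
                if i < len(self._keys) and self._keys[i] == key:
                    return self._objects[i]
                return None

            def max_to_min(self):
                return reversed(self._objects)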

    Read the article

  • Twitter xAuth vs open source

    - by Yorirou
    Hi, I am developing an open source desktop Twitter client. I would like to take advantage of the new xAuth authentication method; however, my app is open source, which means that if I put the keys directly into the source file, it may be a vulnerability (am I correct? That's what the Twitter support guy told me). On the other hand, putting the key directly into a binary also doesn't make sense. I am writing my application in Python, so if I just supply the pyc files, it takes only a second more to get the keys, thanks to the excellent reflection capabilities of Python. If I create a small .so file with the keys, it is also trivial to obtain the key by looking at the raw binary (keys have a fixed length and character set). What is your opinion? Is it really a security hole to expose the API keys?

    Read the article

  • TreeNodes don't get collected with weakevent solution

    - by Marcus
    Hi, when I use this method http://stackoverflow.com/questions/1089309/weak-events-in-net (by Egor) to hook up an event in an inherited TreeNode, the tree node never gets collected. Is there any special case with tree nodes and GC?

        public class MyTreeNode : TreeNode
        {
            public MyTreeNode(Entity entity)
            {
                entity.Children.ListChanged += new ListChangedEventHandler(entityChildren_ListChanged).MakeWeak(eh => entity.Children.ListChanged -= eh);
            }
        }

    Entity.Children is a BindingList. I ran tests with a destructor on MyTreeNode and invoked GC.Collect(); with the weak event handler the TreeNode never gets collected, but it DOES get collected WITHOUT the weak event handler.

    Read the article
