Search Results

Search found 1157 results on 47 pages for 'recursive descent'.

Page 17/47 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • How does a segment based rendering engine work?

    - by Calmarius
    As far as I know, Descent was one of the first games to feature a fully 3D environment, and it used a segment-based rendering engine. Its levels are built from cubic segments (the cubes may be deformed as long as they remain convex and their sides remain roughly flat). The cubes are connected by their sides: connected sides are traversable (doors or grids may be placed on them), while unconnected sides are non-traversable walls, so the game is played inside this complex. Descent was software rendered and had to be very fast to be playable on the 10-100 MHz processors of that age. Some later levels of the game are huge and contain thousands of segments, yet they still render reasonably fast, so I think the engine somehow minimizes the number of cubes rendered. How do you choose which cubes to render for a given location? As far as I know they used a kind of portal rendering, but I couldn't find what technique was used in this particular kind of engine. I suspect the fact that the levels are built from convex quadrilateral hexahedra can be exploited.
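
    A plausible shape for such an engine (a hedged sketch in C#, not Descent's actual code; all names are invented): treat each shared, traversable side as a portal, start from the segment containing the camera, and recurse only through portals that survive frustum clipping, narrowing the frustum at every step. Convexity is exactly what gets exploited: inside a convex cell nothing occludes anything else, and each portal can only shrink the view.

      using System.Collections.Generic;

      class Polygon { /* vertices of the shared face would live here */ }

      class Side
      {
          public Segment Neighbour;   // null => solid, non-traversable wall
          public Polygon Portal;      // the (roughly flat) quadrilateral face
      }

      class Segment
      {
          public Side[] Sides = new Side[6];   // a cube has six sides
      }

      class Frustum
      {
          // Stand-ins for the two geometric tests a real engine would implement.
          public bool CanSee(Polygon portal) => true;        // portal inside frustum?
          public Frustum ClippedTo(Polygon portal) => this;  // narrow frustum to portal
      }

      static class PortalRenderer
      {
          // Depth-first traversal from the segment containing the camera.
          public static void Render(Segment cell, Frustum view, HashSet<Segment> visited)
          {
              if (!visited.Add(cell)) return;     // already drawn this frame
              // Draw this cell's walls here; the cell is convex, so its faces
              // cannot occlude one another and need no depth sorting.
              foreach (var side in cell.Sides)
              {
                  if (side?.Neighbour == null) continue;     // solid wall
                  if (!view.CanSee(side.Portal)) continue;   // portal outside frustum
                  // Recurse with the frustum narrowed to this portal, so whole
                  // branches of the level are culled once a portal leaves view.
                  Render(side.Neighbour, view.ClippedTo(side.Portal), visited);
              }
          }
      }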

    Read the article

  • How to undo a changeset using tf.exe rollback

    - by Tarun Arora
    Oh no! Did you just check a changeset in to TFS and realize that you need to roll it back because the changes were supposed to go into a different branch? Or did you just accidentally merge the wrong changeset into your release branch? There are several ways to undo the damage. Manual: yes, we all just hate this word, but for the record you could manually roll back the changes. Get the specific version on the branch, choosing the changeset prior to the one you checked in. After that, check out all the files in the changeset and check them in. During the check-in you will receive a conflict; choose 'Keep local changes' in the conflict resolution window and check in the files. Automated: yes, we just love it! TFS comes with a very powerful command line utility, tf.exe, that gives you the ability to roll back the effects of one or more changesets to one or more version-controlled items. This command does not remove the changesets from an item's version history. Instead, it creates in your workspace a set of pending changes that negate the effects of the changesets that you specify. Syntax: tf rollback /toversion:VersionSpec ItemSpec [/recursive] [/lock:none|checkin|checkout] [/version:versionspec] [/keepmergehistory] [/login:username,[password]] [/noprompt] tf rollback /changeset:ChangesetFrom~ChangesetTo [ItemSpec] [/recursive] [/lock:none|checkin|checkout] [/version:VersionSpec] [/keepmergehistory] [/noprompt] [/login:username,[password]] I'll explain this with an example. Your workspace is at C:\myWorkspace and you want to roll back changeset #145621: C:\Workspace\MyBranch>tf.exe rollback /changeset:145621 /recursive How do I roll back/undo a series of changesets? You can also roll back a range of changesets: C:\Workspace\MyBranch>tf.exe rollback /changeset:145601~145621 /recursive This will check out the files in version control and you should see them in your pending changes. Go on, check them in to complete the undo of the changesets you just rolled back. Do you want to completely exclude the changeset from all future merges between the two branches? /KeepMergeHistory: this option has an effect only if one or more of the changesets that you are rolling back includes a branch or merge change. Specify it if you want future merges between the same source and the same target to exclude the changes that you are rolling back. Errors: if you get the message "Unable to determine the workspace. You may be able to correct this by running 'tf workspaces /collection:TeamProjectCollectionUrl'", you are in the wrong directory; make sure that you run the tf rollback command from the directory of your workspace. Status exit codes: 0 - the operation rolled back all items successfully; 1 - the operation rolled back at least one item successfully but could not roll back one or more items; 100 - the operation could not roll back any items. To use the command you must have the Read, Check Out, and Check In permissions set to Allow. So, have you been in a rollback situation before?

    Read the article

  • CakePHP paginate using MySQL SQL_CALC_FOUND_ROWS

    - by michael
    I'm trying to make CakePHP's paginate take advantage of MySQL's SQL_CALC_FOUND_ROWS feature to return the total row count while using LIMIT. Hopefully this can eliminate the double query of paginateCount() followed by paginate(). http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows I've put this in my app_model.php, and it basically works, but it could be done better. Can someone help me figure out how to override paginate()/paginateCount() so that only one SQL statement is executed? /** * Overridden paginateCount method, using SQL_CALC_FOUND_ROWS */ public function paginateCount($conditions = null, $recursive = 0, $extra = array()) { $options = array_merge(compact('conditions', 'recursive'), $extra); $options['fields'] = "SQL_CALC_FOUND_ROWS `{$this->alias}`.*"; // Q: how do you get the SAME limit value used in paginate()? $options['limit'] = 1; // ideally, should return same rows as in paginate() // Q: can we somehow get the query results to paginate directly, without the extra query? $cache_results_from_paginateCount = $this->find('all', $options); /* * note: we must run the query to get the count, but it will be cached for multiple paginates, so add a comment to the query */ $found_rows = $this->query("/* do not cache for {$this->alias} */ SELECT FOUND_ROWS();"); $count = array_shift($found_rows[0][0]); return $count; } /** * Overridden paginate method */ public function paginate($conditions, $fields, $order, $limit, $page = 1, $recursive = null, $extra = array()) { $options = array_merge(compact('conditions', 'fields', 'order', 'limit', 'page', 'recursive'), $extra); // Q: can we somehow get $cache_results_from_paginateCount directly from paginateCount()? return $cache_results_from_paginateCount; // paginate in only 1 SQL stmt return $this->find('all', $options); // ideally, we could skip this call entirely }

    Read the article

  • Making an Oracle SELECT read data without applying UNDO

    - by Liu Maclean
    Does Oracle allow dirty reads? No: as Tom Kyte has explained many times on AskTom, an Oracle query reads the before image of changed data from undo so that its results stay consistent; this multi-version read consistency is one of the defining features of the RDBMS. There are, however, two hidden parameters that change this behaviour: _offline_rollback_segments and _corrupted_rollback_segments. They exist so that Oracle Support can recover databases from undo corruption scenarios such as ORA-600 [4XXX] errors, and they are typically used to: force the database open (FORCE OPEN DATABASE); work around consistent read and delayed block cleanout problems; or drop a problem rollback segment. A warning up front: never set these two parameters without the guidance of Oracle Support! What the two parameters have in common: the undo segments listed in them are not brought online at startup; they are marked OFFLINE in UNDO$; the instance does not use them for new transactions; and active transactions inside them are neither marked dead nor recovered by SMON (see "Recover Dead Transaction"). _OFFLINE_ROLLBACK_SEGMENTS (the offline undo segment list) additionally works like this: at startup/open, the undo segments named in it are noted in the alert.log and trace files, but the database still opens. When a query finds an ITL entry pointing into one of these segments, the segment's transaction table is still read to decide whether the transaction committed; a transaction that cannot be found there (because the slot is corrupted or missing) is treated as committed, and a CR copy of the block is produced without applying undo. DML that touches blocks referencing such transactions can spin on them and burn noticeable CPU. _CORRUPTED_ROLLBACK_SEGMENTS (the corrupted undo segment list) goes further: at startup/open, the undo segments named in it are skipped entirely. Any transaction whose ITL points into one of them is simply assumed to have committed, without the transaction table ever being read, and the segments themselves can be dropped. This easily yields logically corrupt data; if bootstrap objects are affected, the database may fail to open with ORA-00704: bootstrap process failure (see the note "ORA-00600 [4000] / ORA-00704: bootstrap process failure"). _CORRUPTED_ROLLBACK_SEGMENTS should therefore only ever be used as a last resort for data rescue; Oracle internally uses the TXChecker tool to check for problem transactions afterwards. With the two parameters introduced, the following demo uses _CORRUPTED_ROLLBACK_SEGMENTS to make a SELECT read data without applying UNDO: SQL> alter system set event= '10513 trace name context forever, level 2' scope=spfile; System altered. SQL> alter system set "_in_memory_undo"=false scope=spfile; System altered. Event 10513 at level 2 stops SMON from rolling back dead transactions; _in_memory_undo=false disables in-memory undo. SQL> startup force; ORACLE instance started. Total System Global Area 3140026368 bytes Fixed Size 2232472 bytes Variable Size 1795166056 bytes Database Buffers 1325400064 bytes Redo Buffers 17227776 bytes Database mounted. Database opened. Session A: SQL> conn maclean/maclean Connected. SQL> create table maclean tablespace users as select 1 t1 from dual connect by level<=501; SQL> exec dbms_stats.gather_table_stats('','MACLEAN'); PL/SQL procedure successfully completed.
    SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 501 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 1 recursive calls 0 db block gets 3 consistent gets 0 physical reads 0 redo size 515 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed With no outstanding transaction, the query reads current blocks only, so there are just 3 consistent gets. Now update the table without committing: SQL> update maclean set t1=0; 501 rows updated. SQL> alter system checkpoint; System altered. Session A does not commit. From another session: SQL> conn maclean/maclean Connected. SQL> SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 501 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 505 consistent gets 0 physical reads 108 redo size 515 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed Because the update is uncommitted, this session must build CR copies of the blocks from undo, and consistent gets rises to 505. [oracle@vrh8 ~]$ ps -ef|grep LOCAL=YES |grep -v grep oracle 5841 5839 0 09:17 ? 00:00:00 oracleG10R25 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) [oracle@vrh8 ~]$ kill -9 5841 Killing session A's server process leaves behind a dead transaction, which SMON would normally roll back: select ktuxeusn, to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') "Time", ktuxesiz, ktuxesta from x$ktuxe where ktuxecfl = 'DEAD'; KTUXEUSN Time KTUXESIZ KTUXESTA ---------- -------------------- ---------- ---------------- 2 06-AUG-2012 09:20:45 7 ACTIVE So there is one active (dead) transaction in rollback segment 2. SQL> conn maclean/maclean Connected.
    SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 501 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 411 consistent gets 0 physical reads 108 redo size 515 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed Even after the kill, with SMON prevented (by event 10513) from recovering the dead transaction, the query still returns the consistent result 501 by reading the before image from the still-active rollback segment. Now find that segment and list it as corrupted: SQL> select segment_name from dba_rollback_segs where segment_id=2; SEGMENT_NAME ------------------------------ _SYSSMU2$ SQL> alter system set "_corrupted_rollback_segments"='_SYSSMU2$' scope=spfile; System altered. With rollback segment 2 named in _corrupted_rollback_segments, its undo will no longer be applied. SQL> startup force; ORACLE instance started. Total System Global Area 3140026368 bytes Fixed Size 2232472 bytes Variable Size 1795166056 bytes Database Buffers 1325400064 bytes Redo Buffers 17227776 bytes Database mounted. Database opened. SQL> conn maclean/maclean Connected. SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 94 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 228 recursive calls 0 db block gets 29 consistent gets 5 physical reads 116 redo size 514 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 4 sorts (memory) 0 sorts (disk) 1 rows processed SQL> / SUM(T1) ---------- 94 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 3 consistent gets 0 physical reads 0 redo size 514 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed
    consistent gets is back down to 3, and the query no longer reads any undo: because the ITLs on the table's blocks point into an undo segment listed in _corrupted_rollback_segments, the uncommitted transaction is simply assumed to have committed and no before image is applied. The result, 94, is neither the pre-update 501 nor the post-update 0 - a dirty, logically inconsistent read. This is why these parameters are reserved for last-resort data rescue (typically forcing the database open just long enough to export the data and rebuild), and why they should only ever be set with the guidance of Oracle Support.

    Read the article

  • Install software in Linux as a non-root user

    - by Aki
    Hi, what is the best way to install software on a Linux machine if you don't have root permissions? I know that we can use a few variables like PKG_CONFIG_PATH and switches like --prefix with configure to get software installed into a local directory, but sometimes, when there are recursive dependencies, it becomes tough for me to install all the packages manually. Is there a better, automated way? Update: what I meant by recursive dependencies is: to install package A, I should install package B, which in turn requires package C to be installed.

    Read the article

  • Can someone describe through code a practical example of backtracking with iteration instead of recursion?

    - by chrisapotek
    Recursion makes backtracking easy, as it guarantees that you won't go through the same path again: all ramifications of your path are visited just once. I am trying to convert a backtracking, tail-recursive (with accumulators) algorithm to iteration. I've heard it is supposed to be easy to convert a perfectly tail-recursive algorithm to iteration, but I am stuck on the backtracking part. Can anyone provide an example, through code, so that I and others can visualize how backtracking is done? I would think that a stack is not needed here, because I have a perfectly tail-recursive algorithm using accumulators, but I could be wrong.
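
    To make the backtracking visible, here is a small, self-contained example (C# is just an illustrative choice): an iterative N-Queens counter in which the partial-solution array itself acts as the stack. Moving down a row plays the role of the recursive call; decrementing the row and resuming from the next column is the backtrack.

      using System;

      class IterativeNQueens
      {
          // col[row] is the column of the queen placed in 'row'.
          static void Main()
          {
              const int n = 6;
              var col = new int[n];
              int row = 0;
              col[0] = -1;            // next candidate in row 0 is col[0] + 1 == 0
              int solutions = 0;

              while (row >= 0)
              {
                  col[row]++;                                // try the next column here
                  if (col[row] == n) { row--; continue; }    // exhausted: backtrack

                  if (IsSafe(col, row))
                  {
                      if (row == n - 1) solutions++;         // full placement found
                      else { row++; col[row] = -1; }         // "recurse" into next row
                  }
                  // unsafe: loop again and try the next column in this row
              }
              Console.WriteLine($"{n}-queens solutions: {solutions}");
          }

          // A queen at (row, col[row]) must not share a column or diagonal
          // with any queen in an earlier row.
          static bool IsSafe(int[] col, int row)
          {
              for (int r = 0; r < row; r++)
                  if (col[r] == col[row] || Math.Abs(col[r] - col[row]) == row - r)
                      return false;
              return true;
          }
      }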

    Read the article

  • How do you write an idiomatic Scala Quicksort function?

    - by Don Mackenzie
    I recently answered a question with an attempt at writing a quicksort function in Scala; I'd seen something like the code below written somewhere. def qsort(l: List[Int]): List[Int] = { l match { case Nil => Nil case pivot::tail => qsort(tail.filter(_ < pivot)) ::: pivot :: qsort(tail.filter(_ >= pivot)) } } My answer received some constructive criticism, pointing out that List was a poor choice of collection for quicksort and, secondly, that the above wasn't tail-recursive. I tried to rewrite the above in a tail-recursive manner but didn't have much luck. Is it possible to write a tail-recursive quicksort? Or, if not, how can it be done in a functional style? Also, what can be done to maximise the efficiency of the implementation? Thanks in advance.
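
    One observation that may help: quicksort makes two recursive calls, and only one of them can ever sit in tail position, so a directly tail-recursive version needs an explicit worklist of pending subranges (an accumulator playing the role of the call stack). A hedged sketch of that worklist idea, written in C# rather than Scala purely for illustration:

      using System;
      using System.Collections.Generic;

      class WorklistQuicksort
      {
          // In-place quicksort driven by an explicit worklist instead of recursion.
          static void Quicksort(int[] a)
          {
              var work = new Stack<(int lo, int hi)>();
              work.Push((0, a.Length - 1));
              while (work.Count > 0)
              {
                  var (lo, hi) = work.Pop();
                  if (lo >= hi) continue;
                  int p = Partition(a, lo, hi);
                  work.Push((lo, p - 1));   // the two pending "recursive calls"
                  work.Push((p + 1, hi));
              }
          }

          // Lomuto partition: everything below the returned index is < pivot.
          static int Partition(int[] a, int lo, int hi)
          {
              int pivot = a[hi], i = lo;
              for (int j = lo; j < hi; j++)
                  if (a[j] < pivot) { (a[i], a[j]) = (a[j], a[i]); i++; }
              (a[i], a[hi]) = (a[hi], a[i]);
              return i;
          }

          static void Main()
          {
              var xs = new[] { 9, 1, 8, 2, 7, 3 };
              Quicksort(xs);
              Console.WriteLine(string.Join(", ", xs));   // 1, 2, 3, 7, 8, 9
          }
      }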

    Read the article

  • How do I compile lbflow 1.1?

    - by Ali.A
    After running "sh ./configure", I encountered another error while installing the lbflow package (a scientific one). The sequence of operations, with the error, is: ./configure --disable-gts sudo make [sudo] password for alireza: make all-recursive make[1]: Entering directory `/home/alireza/lbflow-1.1' Making all in src make[2]: Entering directory `/home/alireza/lbflow-1.1/src' source='lbflow.cpp' object='lbflow-lbflow.o' libtool=no \ DEPDIR=.deps depmode=none /bin/bash ../depcomp \ g++ -DHAVE_CONFIG_H -I. -I. -I.. -c -o lbflow-lbflow.o `test -f 'lbflow.cpp' || echo './'`lbflow.cpp ../depcomp: line 432: exec: g++: not found make[2]: *** [lbflow-lbflow.o] Error 127 make[2]: Leaving directory `/home/alireza/lbflow-1.1/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/alireza/lbflow-1.1' make: *** [all] Error 2 Do you have any idea how to troubleshoot this problem? (Please note that I have installed both g++ and gcc. It says g++: not found, but I installed g++ from the Ubuntu Software Center!)

    Read the article

  • What if we run out of stack space in C# or Python?

    - by dotneteer
    Suppose we are running a recursive algorithm on a very large data set that requires, say, 1 million recursive calls. Normally, one would solve such a large problem by converting the recursion to a loop and a stack, but what if we do not want to, or cannot, rewrite the algorithm? Python has the sys.setrecursionlimit(int) method to set the number of recursions. However, this is only part of the story; the program can still run out of stack space. C# does not have an equivalent method. Fortunately, both C# and Python have the option to set the stack size when creating a thread. In C#, there is an overloaded constructor for the Thread class that accepts a parameter for the stack size: Thread t = new Thread(work, stackSize); In Python, we can set the stack size by calling: threading.stack_size(67108864) We can then run our work under a new thread with the increased stack size.
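
    To make the C# half concrete, a minimal sketch; the recursion depth and the 64 MB figure (the same number as the Python snippet) are illustrative, not magic values:

      using System;
      using System.Threading;

      class Program
      {
          // Deliberately deep, non-tail recursion: ~500,000 frames comfortably
          // exceeds the default ~1 MB stack but fits in the 64 MB requested below.
          // (Frame sizes vary by platform and JIT, so treat both numbers as rough.)
          static long Depth(long n) => n == 0 ? 0 : 1 + Depth(n - 1);

          static void Main()
          {
              // Second constructor argument is maxStackSize, in bytes.
              var worker = new Thread(() => Console.WriteLine(Depth(500_000)),
                                      64 * 1024 * 1024);
              worker.Start();
              worker.Join();   // prints 500000 instead of overflowing the stack
          }
      }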

    Read the article

  • How can I test a parser for a bespoke XML schema?

    - by Greg B
    I'm parsing a bespoke XML format into an object graph using .NET 4.0. My parser uses the System.Xml namespace internally; I then interrogate the relevant properties of XmlNodes to create my object graph. I've got a first cut of the parser working on a basic input file, and I want to put some unit tests around it before I progress to more complex input files. Is there a pattern for how to test a parser such as this? When I started looking at this, my first move was to new up an XmlDocument and an XmlNamespaceManager and create an XmlElement, but it occurs to me that this is quite lengthy and prone to human error. My parser is quite recursive, as you can imagine, and this might lead to testing the full system rather than the individual units (methods) of the system. So a second question might be: what refactoring might make a recursive parser more testable?
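
    One pattern that keeps such tests short is to build each input from an inline string and to exercise the parser one node-handling method at a time. A sketch, assuming NUnit as the framework; the Node/MyParser types below are invented stand-ins for the real parser:

      using System.Collections.Generic;
      using System.Linq;
      using System.Xml;
      using NUnit.Framework;

      // Minimal stand-in for the parser under test: maps an XmlElement tree
      // onto a simple object graph. Replace with the actual parser class.
      public class Node
      {
          public string Name;
          public int Id;
          public List<Node> Children = new List<Node>();
      }

      public static class MyParser
      {
          public static Node ParseNode(XmlElement e) => new Node
          {
              Name = e.LocalName,
              Id = int.TryParse(e.GetAttribute("id"), out var id) ? id : 0,
              Children = e.ChildNodes.OfType<XmlElement>().Select(ParseNode).ToList(),
          };
      }

      [TestFixture]
      public class ParserTests
      {
          // Build an XmlElement from a literal, so each test states its input
          // inline instead of hand-assembling a document node by node.
          static XmlElement Xml(string fragment)
          {
              var doc = new XmlDocument();
              doc.LoadXml(fragment);
              return doc.DocumentElement;
          }

          [Test]
          public void LeafNode_MapsNameAndId()
          {
              var node = MyParser.ParseNode(Xml("<widget id='7'/>"));
              Assert.AreEqual("widget", node.Name);
              Assert.AreEqual(7, node.Id);
          }

          [Test]
          public void NestedNodes_RecurseOneLevel()
          {
              var node = MyParser.ParseNode(Xml("<box><widget id='1'/></box>"));
              Assert.AreEqual(1, node.Children.Count);
              Assert.AreEqual("widget", node.Children[0].Name);
          }
      }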

    Read the article

  • Faking display tree (Sprite) parent-child relationships with rasters (BitmapData) in ActionScript 3

    - by Arthur Wulf White
    I am working with rasters (BitmapData) and blitting (copyPixels) in a 2D game in ActionScript 3. I like how moving a Sprite moves all its children, and how you can simultaneously move the children to create interesting visual effects. I do not want to use rotation or scaling, however, because I do not know how that could be done without hampering performance, so I am not fully simulating Sprite parent-child behaviour: I am sticking to movement on the (x, y) axes. What I am planning is a class called RasterContainer, which extends BitmapData and has a vector of children of type Raster (also extending BitmapData). I plan to implement recursive rendering in RasterContainer that basically copyPixels every child, only changing their (x, y) offset to reflect their parent's offset. My question is: has this been implemented in an existing framework? Is this a good plan? Should I expect a serious performance hit for using recursive methods this way?
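
    The offset bookkeeping described above is small enough to sketch. In C# pseudo-form (illustrative names; Console.WriteLine stands in for the copyPixels call), each node draws at its parent's absolute position plus its own relative offset, then passes that absolute position down:

      using System;
      using System.Collections.Generic;

      class Raster
      {
          public string Name;
          public int X, Y;                              // offset relative to parent
          public List<Raster> Children = new List<Raster>();

          public void Render(int parentX, int parentY)
          {
              int worldX = parentX + X, worldY = parentY + Y;
              // A real version would copyPixels this raster's bitmap into the
              // backbuffer at (worldX, worldY).
              Console.WriteLine($"blit {Name} at ({worldX}, {worldY})");
              foreach (var child in Children)
                  child.Render(worldX, worldY);   // children inherit our offset
          }
      }

      class Demo
      {
          static void Main()
          {
              var ship = new Raster { Name = "ship", X = 100, Y = 50 };
              ship.Children.Add(new Raster { Name = "turret", X = 8, Y = -4 });
              ship.Render(0, 0);   // moving 'ship' implicitly moves 'turret'
          }
      }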

    Read the article

  • Samba user does not have folder read permission

    - by user289455
    I have set up a special user for read-only Samba shares, both in Samba and as a system user. I shared a couple of folders, but that user cannot access them. I know Samba is working, because I also shared the folders with the main user of the system, which is an admin account, and that works fine. How can I give this user read permission on all the directories I want to share, without changing anything for any other users of the system? For example, I don't want to give him ownership of any of the files/directories - just ongoing, recursive read access. The "ongoing, recursive" part is important: if someone adds a file or directory, I still want him to automatically be able to read it.

    Read the article

  • Installing Monodevelop from the SVN on Ubuntu 10.04

    - by celil
    I wrote the following script to install the SVN version of MonoDevelop: #!/usr/bin/env bash PREFIX=/opt/local check_errs() { if [[ $? -ne 0 ]]; then echo "${1}" exit 1 fi } download() { if [ ! -d ${1} ] then svn co http://anonsvn.mono-project.com/source/trunk/${1} else (cd ${1}; svn update) fi } download mono download mcs download libgdiplus ( cd mono ./autogen.sh --prefix=$PREFIX make make install check_errs ) ( cd libgdiplus ./autogen.sh --prefix=$PREFIX make make install check_errs ) download monodevelop export PKG_CONFIG_PATH=${PREFIX}/lib/pkgconfig ( cd monodevelop ./configure --prefix=$PREFIX --select check_errs make check_errs ) Everything works fine until the last make step for the monodevelop package, where the script exits with the error: ./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(320,82): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `SyncMethod' and no extension method `SyncMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?) ./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(325,49): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `SyncMethod' and no extension method `SyncMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?) ./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(345,115): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `SyncMethod' and no extension method `SyncMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?) ./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(365,82): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `BeginMethod' and no extension method `BeginMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?) Compilation failed: 4 error(s), 1 warnings make[4]: *** [../../../build/AddIns/MonoDevelop.WebReferences/MonoDevelop.WebReferences.dll] Error 1 make[4]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main/src/addins/MonoDevelop.WebReferences' make[3]: *** [all-recursive] Error 1 make[3]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main/src/addins' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main' make: *** [all-recursive] Error 1 Any ideas on how to fix this? I suppose the build is getting mixed up with the default installation of Mono in Ubuntu and is looking for a symbol that is not present there. My build configuration looks as follows: 1. [X] main 2. [ ] extras/JavaBinding 3. [ ] extras/BooBinding 4. [X] extras/ValaBinding 5. [ ] extras/AspNetEdit 6. [ ] extras/GeckoWebBrowser 7. [ ] extras/WebKitWebBrowser 8. [ ] extras/MonoDevelop.Database 9. [ ] extras/MonoDevelop.Profiling 10. [ ] extras/MonoDevelop.AddinAuthoring 11. [ ] extras/MonoDevelop.CodeAnalysis 12. [ ] extras/MonoDevelop.Debugger.Mdb 13. [ ] extras/MonoDevelop.Debugger.Gdb 14. [ ] extras/PyBinding 15. [ ] extras/MonoDevelop.IPhone 16. [ ] extras/MonoDevelop.MeeGo

    Read the article

  • Selecting merge strategy options for git rebase

    - by porneL
    The git-rebase man page mentions that -X<option> can be passed to git-merge. When, and how, exactly? I'd like to rebase by applying patches with the recursive strategy and its 'theirs' option (apply whatever sticks, rather than skipping entire conflicting commits). I don't want a merge; I want to keep the history linear. I've tried git rebase -Xtheirs and git rebase -s 'recursive -Xtheirs', but git rejects -X in both cases.

    Read the article

  • Eclipse CDT Linuxtools gives broken pipe error

    - by ole
    I am running Eclipse CDT 6.0.2 on a SLES 11 x86_64 platform. My project is of the linuxtools type. I am getting the following error when running builds: ../libtool: line 747: echo: write error: Broken pipe make[2]: write error make[1]: *** [all-recursive] Error 1 make[1]: write error make: *** [all-recursive] Error 1 Any help is appreciated.

    Read the article

  • Extracting specific files from ZIP in PHP

    - by Michael
    If I have a ZIP file whose structure is: -directory1 DIR -files in here -directory2 DIR -more files in here Using pclzip.lib.php, how can I open up this ZIP file and extract directory1 (recursively) into one directory, and then extract directory2 (recursively) into another directory?

    Read the article

  • Git ignore all folders apart from

    - by digital
    I want to ignore all the files in my folder structure apart from the following: profiles (and all folders/files, recursively) and sites/xxx (and all folders/files, recursively). Currently my gitignore file looks like: `*` !sites/xxx !sites/xxx/modules !sites/xxx/modules/* !profiles !profiles/xxx !profiles/xxx/* This doesn't allow me to track sites/xxx/modules/new, though. Is there any way around this?

    Read the article

  • Octree subdivision problem

    - by ChaosDev
    I'm creating an octree manually and want a function that effectively divides all nodes and their subnodes: for example, I press a button and the subnodes are divided; I press it again and all subnodes are divided again. It must go 1 - 8 - 64. The problem is, I don't understand how to organize the recursive loops for that. OctreeNode in my unoptimized implementation contains pointers to the subnodes (children), the parent, an extra vector (containing duplicates of the children), generation info, and lots of information for drawing. class gOctreeNode { //necessary fields gOctreeNode* FrontBottomLeftNode; gOctreeNode* FrontBottomRightNode; gOctreeNode* FrontTopLeftNode; gOctreeNode* FrontTopRightNode; gOctreeNode* BackBottomLeftNode; gOctreeNode* BackBottomRightNode; gOctreeNode* BackTopLeftNode; gOctreeNode* BackTopRightNode; gOctreeNode* mParentNode; std::vector<gOctreeNode*> m_ChildsVector; UINT mGeneration; bool mSplitted; bool isSplitted(){return mSplitted;} .... //unnecessary fields }; DivideNode in the Octree class fills these fields, sets mSplitted to true, and prepares the node for correct drawing. The octree contains basic nodes (m_nodes). A basic node can be divided, but now I want to recursively divide an already divided basic node with 8 subnodes. So I wrote this function. void DivideAllChildCells(int ix,int ih,int id) { std::vector<gOctreeNode*> nlist; std::vector<gOctreeNode*> dlist; int index = (ix * m_Height * m_Depth) + (ih * m_Depth) + (id * 1);//get index of specified node gOctreeNode* baseNode = m_nodes[index].get(); nlist.push_back(baseNode->FrontTopLeftNode); nlist.push_back(baseNode->FrontTopRightNode); nlist.push_back(baseNode->FrontBottomLeftNode); nlist.push_back(baseNode->FrontBottomRightNode); nlist.push_back(baseNode->BackBottomLeftNode); nlist.push_back(baseNode->BackBottomRightNode); nlist.push_back(baseNode->BackTopLeftNode); nlist.push_back(baseNode->BackTopRightNode); bool cont = true; UINT d = 0;//additional recursive loop param (?) UINT g = 0;//additional recursive loop param (?) LoopNodes(d,g,nlist,dlist); //Divide resulting nodes for(UINT i = 0; i < dlist.size(); i++) { DivideNode(dlist[i]); } } And now, back to the main question, here is LoopNodes, which must do all the work of collecting into dlist the nodes to split. void LoopNodes(UINT& od,UINT& og,std::vector<gOctreeNode*>& nlist,std::vector<gOctreeNode*>& dnodes) { //od++;//recursion depth bool f = false; //pass through childs for(UINT i = 0; i < 8; i++) { if(nlist[i]->isSplitted())//if node splitted and have childs { //pass forward through tree for(UINT j = 0; j < 8; j++) { nlist[j] = nlist[j]->m_ChildsVector[j];//set pointers to these childs } LoopNodes(od,og,nlist,dnodes); } else //if no childs { //add to split vector dnodes.push_back(nlist[i]); } } } This version of LoopNodes only works correctly for 2 (or 1?) generations; after that it does not divide neighbouring nodes, only some corners. I need a correct algorithm. All I need is a correct version of LoopNodes that can add all the nodes for DivideNode.
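
    For comparison, the traversal usually wanted here is: recurse into nodes that are already split, collect the unsplit leaves, then divide the collected leaves (collecting first prevents freshly created children from being divided again within the same pass). The lockstep advance nlist[j] = nlist[j]->m_ChildsVector[j] in LoopNodes is what loses the neighbours, since it replaces all eight cursors at once instead of descending into one child at a time. A sketch of the usual shape, written in C# with illustrative names:

      using System.Collections.Generic;

      class OctreeNode
      {
          public bool Splitted;
          public List<OctreeNode> Children = new List<OctreeNode>(8);

          // Stand-in for DivideNode: create the 8 child cells.
          public void Divide()
          {
              for (int i = 0; i < 8; i++) Children.Add(new OctreeNode());
              Splitted = true;
          }
      }

      static class Octree
      {
          // Take every leaf of the subtree one generation deeper: 1 -> 8 -> 64.
          public static void DivideAllLeaves(OctreeNode node)
          {
              var leaves = new List<OctreeNode>();
              CollectLeaves(node, leaves);
              foreach (var leaf in leaves) leaf.Divide();
          }

          static void CollectLeaves(OctreeNode node, List<OctreeNode> leaves)
          {
              if (!node.Splitted) { leaves.Add(node); return; }
              foreach (var child in node.Children)
                  CollectLeaves(child, leaves);   // descend into each child in turn
          }
      }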

    Read the article

  • Marshalling C# COM items when using recursion

    - by Kevin
    I am using the SourceSafe COM object (SourceSafeTypeLib) from C# to automate a recursive SourceSafe get (part of a larger build process). The recursive function is shown below. How do I ensure that all the COM objects created in the foreach loop get released correctly? /// <summary> /// Recursively gets files/projects from SourceSafe. /// </summary> /// <param name="vssItem">The VSSItem to get</param> private void GetChangedFiles(VSSItem vssItem) { // If the object is a file, perform the diff; // if not, it is a project, so use recursion to go through it if(vssItem.Type == (int)VSSItemType.VSSITEM_FILE) { bool bDifferent = false; //file is different bool bNew = false; //file is new // Surround the diff in a try-catch block. If a file is new (doesn't exist on // the local filesystem) an error will be thrown. Catch this error and record it // as a new file. try { bDifferent = vssItem.get_IsDifferent(vssItem.LocalSpec); } catch { //File doesn't exist bDifferent = true; bNew = true; } // If the file is different (or new), get it and log the message if(bDifferent) { if(bNew) { clsLog.WriteLine("Getting " + vssItem.Spec); } else { clsLog.WriteLine("Replacing " + vssItem.Spec); } string strGetPath = vssItem.LocalSpec; vssItem.Get(ref strGetPath, (int)VSSFlags.VSSFLAG_REPREPLACE); } } else //Item is a project, recurse through its sub items { foreach(VSSItem fileItem in vssItem.get_Items(false)) { GetChangedFiles(fileItem); } } }
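
    One common answer (a sketch of the pattern, not necessarily the only correct one) is to release each child's runtime-callable wrapper deterministically in a finally block as soon as the recursion over it returns, rather than leaving it to the garbage collector:

      using System.Runtime.InteropServices;
      using SourceSafeTypeLib;   // same interop assembly the question uses

      class VssWalker
      {
          public void Walk(VSSItem vssItem)
          {
              if (vssItem.Type == (int)VSSItemType.VSSITEM_FILE)
              {
                  // ... the diff/get logic from the question goes here ...
                  return;
              }

              foreach (VSSItem child in vssItem.get_Items(false))
              {
                  try { Walk(child); }
                  finally
                  {
                      // Decrements the RCW's reference count; the underlying
                      // COM object is freed once the count reaches zero.
                      Marshal.ReleaseComObject(child);
                  }
              }
          }
      }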

    Read the article

  • Why do people hate SQL cursors so much?

    - by Steven A. Lowe
    I can understand wanting to avoid a cursor because of the overhead and inconvenience, but it looks like there's some serious cursor-phobia-mania going on, with people going to great lengths to avoid using one. For example, one question asked how to do something obviously trivial with a cursor, and the accepted answer proposed a common table expression (CTE) recursive query with a recursive custom function, even though this limits the number of rows that can be processed to 32 (due to the recursive call limit in SQL Server). That strikes me as a terrible solution for system longevity, not to mention a tremendous effort just to avoid using a simple cursor. What is the reason for this level of insane hatred? Has some 'noted authority' issued a fatwa against cursors? Does some unspeakable evil lurk in the heart of cursors that corrupts the morals of children or something? Wiki question - more interested in the answer than the rep. Thanks in advance! Related info: http://stackoverflow.com/questions/37029/sql-server-fast-forward-cursors EDIT: Let me be more precise: I understand that cursors should not be used instead of normal relational operations; that is a no-brainer. What I don't understand is people going waaaaay out of their way to avoid cursors as if they have cooties, even when a cursor is the simpler and/or more efficient solution. It's the irrational hatred that baffles me, not the obvious technical efficiencies.

    Read the article

  • Asymptotic runtime of list-to-tree function

    - by Deestan
    I have a merge function, which takes O(log n) time to combine two trees into one, and a listToTree function, which converts an initial list of elements to singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. Function signatures and the relevant implementations are as follows: merge :: Tree a -> Tree a -> Tree a --// O(log n) where n is size of input trees singleton :: a -> Tree a --// O(1) empty :: Tree a --// O(1) listToTree :: [a] -> Tree a --// Supposedly O(n) listToTree = listToTreeR . (map singleton) listToTreeR :: [Tree a] -> Tree a listToTreeR [] = empty listToTreeR (x:[]) = x listToTreeR xs = listToTreeR (mergePairs xs) mergePairs :: [Tree a] -> [Tree a] mergePairs [] = [] mergePairs (x:[]) = [x] mergePairs (x:y:xs) = merge x y : mergePairs xs This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I shall now show that listToTree takes O(n) time - which I can't. :-( There are trivially ceil(log n) recursive calls to listToTreeR, meaning ceil(log n) calls to mergePairs. The running time of mergePairs depends on the length of the list and on the sizes of the trees. The length of the list is 2^h-1, and the sizes of the trees are log(n/(2^h)), where h=log n is the first recursive step and h=1 is the last. Each call to mergePairs thus takes time (2^h-1) * log(n/(2^h)). I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?
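
    As a hint, count the work per round of mergePairs rather than per call: in round k (for k = 1 .. ceil(log2 n)) there are n/2^(k-1) trees of size 2^(k-1), so mergePairs performs n/2^k merges, each costing O(log 2^(k-1)) = O(k). Summing over the rounds (a sketch of the bound under those conventions):

      \[ T(n) \;=\; \sum_{k=1}^{\lceil \log_2 n \rceil} \frac{n}{2^k}\, O(k) \;=\; O\!\Big( n \sum_{k \ge 1} \frac{k}{2^k} \Big) \;=\; O(n), \]

    since the series \(\sum_{k \ge 1} k/2^k\) converges (to 2): the number of merges halves each round faster than the cost of each individual merge grows.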

    Read the article

  • How can I create a "dynamic" WHERE clause?

    - by TheChange
    Hello there! First: thanks! I finished my other project and, big surprise, now everything works as it should :-) Thanks to some helpful thinkers on SO! So here I go with the next project. I'd like to get something like this: SELECT * FROM tablename WHERE field1=content AND field2=content2 ... As you've noticed, this can be a very long WHERE clause. tablename is a static property which does not change; field1, field2, ... (!) and the contents can change. So I need a way to build up a SQL statement in PL/SQL within a recursive function. I don't really know what to search for, so I'm asking here for links, or even just a term to search for. Please don't start arguing about whether the recursive function is really needed or what its disadvantages are - that is not in question ;-) If you could help me create something like a SQL string which will later be able to do a successful SELECT, that would be very nice! I am able to go through the recursive function and make the string longer each time, but I cannot make an SQL statement from it. Oh, one additional thing: I get the fields and contents from an xmlType (xmldom.domdocument etc.); I can get the field and the content, for example, in a CLOB from the xmlType.

    Read the article

  • Compile Apache 2.4.3 on CentOS 6.2 (64-bit)

    - by RiseCakoPlusplus
    I am attempting to compile Apache 2.4.3 with apr-1.4.6 and apr-util-1.5.1 on CentOS 6.2 (64-bit). ./configure --build=x86_64-unknown-linux-gnu --host=x86_64-unknown-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --disable-debug --with-pic --disable-rpath --without-pear --with-bz2 --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr --with-xpm-dir=/usr --enable-gd-native-ttf --with-t1lib=/usr --without-gdbm --with-gettext --with-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-zlib --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets --with-kerberos --enable-ucd-snmp-hack --enable-shmop --enable-calendar --with-libxml-dir=/usr --enable-xml --with-system-tzdata --with-mhash --with-apxs2=/usr/sbin/apxs --libdir=/usr/lib64/php --enable-pdo=shared --with-mysql=shared,/usr --with-mysqli=shared,/usr/lib64/mysql/mysql_config --with-pdo-mysql=shared,/usr/lib64/mysql/mysql_config --without-pdo-sqlite --without-gd --disable-dom --disable-dba --without-unixODBC --disable-xmlreader --disable-xmlwriter --without-sqlite3 --disable-phar --disable-fileinfo --disable-json --without-pspell --disable-wddx --without-curl --disable-posix --disable-sysvmsg --disable-sysvshm --disable-sysvsem ./configure --with-included-apr --with-included-apr-util When I issue make, this happens: /root/httpd-2.4.3/srclib/apr/libtool: line 5989: cd: yes/lib: No such file or directory libtool: link: cannot determine absolute directory name of `yes/lib' make[3]: *** [libaprutil-1.la] Error 1 make[3]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/root/httpd-2.4.3/srclib' make: *** [all-recursive] Error 1 Anything I missed?

    Read the article

  • Cannot get right height of text in java.awt.BufferedImage/Graphics2D

    - by Tommy
    I'm creating a servlet that renders a JPG/PNG with a given text, and I want the text to be centered on the rendered image. I can get the width, but the height I'm getting seems to be wrong. Font myfont = new Font(Font.SANS_SERIF, Font.BOLD, 400); BufferedImage image = new BufferedImage(500, 500, BufferedImage.TYPE_INT_ARGB); Graphics2D g = image.createGraphics(); g.setFont(myfont); g.setColor(Color.BLACK); FontMetrics fm = g.getFontMetrics(); Integer textwidth = fm.stringWidth(imagetext); Integer textheight = fm.getHeight(); FontRenderContext fr = g.getFontRenderContext(); LineMetrics lm = myfont.getLineMetrics("5", fr ); float ascent = lm.getAscent(); float descent = lm.getDescent(); float height = lm.getHeight(); g.drawString("5", ((imagewidth - textwidth) / 2) , y?); g.dispose(); ImageIO.write(image, "png", outputstream); These are the values I get: textwidth = 222, textheight = 504, ascent = 402, descent = 87, height = 503. Anyone know how to get the exact height of the "5"? The estimated height should be around 250.

    Read the article
