
  • Easily (as in WYSIWYG) customize the docbook output

    - by Sukima
    I've used DocBook in the past and I love the idea behind the separation of content from presentation. I am very comfortable editing XML directly. In my extensive search for the best documentation solution for my needs, I keep coming back to the same pipeline: DocBook - build system (ant, make, etc.) - output.

    I have seen lots of information about the best WYSIWYG, XML, and text editors for writing DocBook, including alternative markup languages like asciidoc. All these solutions focus on the creation of DocBook, or on the nightmare of the DocBook tool chain. No one ever addresses the output side other than to say "just use XSL" or "custom scripts". When tasked with making a document or manual, I don't want to spend countless hours reprogramming, customizing, and modifying the XSL, CSS, and shell scripts (to end up with something like the O'Reilly book look). That is a very arduous task.

    My question: is there a tool that makes the customizing easier? Is there anything similar to, say, Pages or Word, where the user creates a template and the tool chain does the rest? A visual task like placing pretty logos and fixing all the broken layouts that the default XSL produces (pagination is a mess) is very difficult from a text editor. Content is easy; editing the DocBook XSL was truly a nightmare when I did it in the past. I've searched and found lots of info on XML editors but nothing on XSL editors. Or am I lacking a key understanding of the process? Thanks.
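
    For what it's worth, the standard way to avoid editing the DocBook XSL in place is a customization layer: import the stock stylesheet and override only the parameters you care about. A minimal sketch (the import href assumes a local install path, so adjust it):

        <?xml version="1.0"?>
        <!-- customization layer: import the stock FO stylesheet, then
             override parameters instead of modifying docbook.xsl itself -->
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
          <xsl:import href="/usr/share/xml/docbook/stylesheet/docbook-xsl/fo/docbook.xsl"/>
          <xsl:param name="paper.type">A4</xsl:param>
          <xsl:param name="body.font.family">serif</xsl:param>
          <xsl:param name="double.sided">1</xsl:param>
        </xsl:stylesheet>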


  • Having trouble with confusing behaviour of error between debug and release modes in Xcode

    - by Cocorico
    Hi guys, I am confused over something (what is new!). I have an iPhone program I am writing, and in a certain method that uses some sqlite there is an error that gives me the message "Program received signal: EXC_BAD_ACCESS".

    Okay, so I am trying to hunt down why this is happening, and I notice something: when I run the program in debug mode, it gives me this error every single time I access this method (I test on the device). However, when I run the program in release mode, I can access this method twice, and then it gives me the error the third time.

    So can someone give me an explanation of what might cause this? I think that maybe, deep down, I am not that smart on the difference in Xcode between debug and release modes. I think that release mode does optimizing, and I guess the actual machine code comes out different, yes? I am a big newbie, unfortunately! I am not clear on a lot of things like this, or on whether I need to remove NSLog calls in the release build and such. Maybe I should just post the actual code in a separate Stack Overflow post and see if people can spot the error; then maybe this will all become clear to me.


  • Please help with passing multidimensional arrays

    - by nn
    Hi, I'm writing a simple test program to pass multidimensional arrays around, and I've been struggling to get the signature of the callee function right:

        void p(int (*s)[100], int n) { ... }

    In the calling code I have:

        int s1[10][100], s2[10][1000];
        p(s1, 100);

    This code appears to work, but it's not what I intended. I want the function p to be oblivious to whether the range of values is 100 or 1000, though it can know there are 10 arrays. As a first attempt I tried:

        void p(int (*s)[10], int n) // n = # of elements in the range of the array

    and also:

        void p(int **s, int n) // n = # of elements in the range of the array

    but to no avail; I can't seem to get this correct. I don't want to hardcode the 100 or 1000, but instead pass it in; there will always be 10 arrays. Obviously, I want to avoid having to declare the function as:

        void p(int *s1, int *s2, int *s3, ..., int *s10, int n)

    FYI, I'm looking at the answers to a similar question but am still confused.
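
    For reference, in C99 this shape of problem is usually solved with a variable-length array parameter, so the column count travels with the call instead of being baked into the type. A minimal sketch:

        #include <stdio.h>

        /* C99 VLA parameter: cols is supplied at the call site */
        void p(int rows, int cols, int s[rows][cols]) {
            for (int i = 0; i < rows; i++)
                for (int j = 0; j < cols; j++)
                    s[i][j] = 0;          /* touch every element */
        }

        int main(void) {
            int s1[10][100], s2[10][1000];
            p(10, 100, s1);               /* same function,  */
            p(10, 1000, s2);              /* either width    */
            return 0;
        }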


  • How do I fork correctly in a perl module for znc?

    - by rarbox
    I'm currently writing an IRC bot. The scripts are loaded as perl modules in ZNC, but the bot gets disconnected with an Input/output error if I create a forked process. This is a working example script without fork, but it causes the bot to freeze until the script finishes doing its task:

        package imdb;
        use warnings;
        use strict;

        sub new {
            my ($class) = @_;
            my $self = {};
            bless( $self, $class );
            return( $self );
        }

        sub OnChanMsg {
            my ($self, $nick, $channel, $text) = @_;
            #unless (my $pid = fork()) {
                my $result = a_slow_process($text);
                ZNC::PutIRC( "PRIVMSG $channel :$result" );
            #    exit;
            #}
            return( ZNC::CONTINUE );
        }

        sub OnShutdown {
            my ( $me ) = @_;
        }

        sub a_slow_process {
            my $input = shift;
            sleep 10;
            return "You said $input.";
        }

        1;

    The fork code that is causing the error is commented out. How do I fix this? Edited to add: I was told that ZNC::PutIRC should not be put in the child process.
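
    A sketch of the usual shape of the fix, assuming the advice above is right: the child must never touch ZNC's sockets (no ZNC::PutIRC), and it should leave via POSIX::_exit so global destructors don't close handles shared with the parent; the result then has to travel back through a pipe or file that the parent picks up later:

        use POSIX ();

        sub OnChanMsg {
            my ($self, $nick, $channel, $text) = @_;
            my $pid = fork();
            if (!defined $pid) {
                ZNC::PutIRC("PRIVMSG $channel :fork failed: $!");
            } elsif ($pid == 0) {
                # child: do the slow work only; writing the result to a pipe
                # or temp file for the parent to collect is left out here
                my $result = a_slow_process($text);
                POSIX::_exit(0);  # skip destructors that would close ZNC's sockets
            }
            # parent returns immediately, so the bot keeps responding
            return ZNC::CONTINUE;
        }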


  • Downloading javascript Without Blocking

    - by doug
    The context: my question relates to improving web-page loading performance, and in particular to the effect that javascript has on page loading (resources/elements below the script are blocked from downloading/rendering). This problem is usually avoided/mitigated by placing the scripts at the bottom (e.g., just before the closing body tag). The code I am looking at is for web analytics. Placing it at the bottom reduces its accuracy, and because this script has no effect on the page's content (i.e., it's not re-writing any part of the page) I want to move it inside the head. Just how to do that without ruining page-loading performance is the crux.

    From my research, I've found six techniques (with support among all or most of the major browsers) for downloading scripts so that they don't block down-page content from loading/rendering: (i) XHR + eval(); (ii) XHR + inject; (iii) downloading the HTML-wrapped script in an iFrame; (iv) setting the script tag's 'async' flag to true (HTML5 only); (v) setting the script tag's 'defer' attribute; and (vi) the 'Script DOM Element' pattern.

    It's the last of these I don't understand. The javascript to implement pattern (vi) is:

        (function() {
            var q1 = document.createElement('script');
            q1.src = 'http://www.my_site.com/q1.js';
            document.documentElement.firstChild.appendChild(q1);
        })();

    Seems simple enough: inside this anonymous function, a script element is created, its 'src' attribute is set to the script's location, then the script element is added to the DOM. But while each line is clear, it's still not clear to me how exactly this pattern allows script loading without blocking down-page elements/resources from rendering/loading.


  • Cannot access Class methods from previous windows form - C#

    - by George
    I am writing an app (still!) where I need to test some devices every minute for 30 minutes. It made sense to use a timer set to kick off every 60 seconds and do what's required in the event handler. However, I need the app to wait for the 30 minutes until I have finished with the timer, since the code that follows alters the state of the devices I am trying to monitor. I obviously don't want to use any form of busy loop to do this.

    I thought of using another windows form, since I also display the progress, which will simply kick off the timer and wait until it's complete. The problem I am having is that I use a device class and can't seem to get access to the methods in the device class from the 2nd (3rd actually, see below) windows form. I have an initial windows form where I get input from the user, then call the 2nd windows form, which works out which tests need to be done and which device classes need to be used, and then I want to call the 3rd windows form to handle the timer. I will have up to 6-7 device classes and so wanted to instantiate them only when actually required, from the 2nd form.

    Should I have put this logic into the 1st windows form (Program class?)? Would I not still have the problem of not being able to access device class methods from there too? Perhaps someone knows of a better way to do the checks every minute without the rest of the code executing (and changing the status of the devices), or of how I should be accessing the methods in the app. That's the problem: I can't get that part of it to work correctly.

    Here is the definition of the calling form, including the device class:

        namespace NdtStart
        {
            public partial class fclsNDTCalib : Form
            {
                NDTClass NDT = new NDTClass();
                public fclsNDTCalib()
                ...
                (new fclsNDTTicker(NDT)).ShowDialog();

    And here is the class definition of the called form:

        namespace NdtStart
        {
            public partial class fclsNDTTicker : Form
            {
                public fclsNDTTicker()

    I tried lots of things but couldn't get the arguments to work.
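
    The usual way out of this, sketched below, is to pass the existing device object into the ticker form through its constructor rather than having the form look it up; timer1_Tick and PollDevices are illustrative names, not anything from the original code:

        namespace NdtStart
        {
            public partial class fclsNDTTicker : Form
            {
                private readonly NDTClass _ndt;

                // the calling form hands over the instance it already owns:
                // (new fclsNDTTicker(NDT)).ShowDialog();
                public fclsNDTTicker(NDTClass ndt)
                {
                    InitializeComponent();
                    _ndt = ndt;   // same device object the 2nd form was using
                }

                private void timer1_Tick(object sender, EventArgs e)
                {
                    _ndt.PollDevices();   // hypothetical method on the device class
                }
            }
        }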


  • Providing lat/long AND title in iOS Maps URL seems to cause zoom level to be ignored

    - by Ian Howson
    I'm writing an iOS app that shows a location in Maps upon a user action. I'd like to drop a pin with a description and zoom in to show map detail. If I invoke Maps with the URL

        http://maps.google.com/maps?q=-33.895851,151.18483+(Some+Description)&z=19

    I get a pin with 'Some Description', but the zoom level is ignored. This does work on the Google Maps website. If I use

        http://maps.google.com/maps?q=-33.895851,151.18483&z=19

    the zoom works, but I get no pin. I've tried a few combinations of ?q=, ?ll= and ?sll=, but so far nothing will change the zoom and show a description. Any clues?

    Just so we're really clear, here are some screenshots. I want this to work on a real device (i.e. with iOS Maps); the simulator uses Google Maps through Safari. Here's what I see with URL 1:

        [[UIApplication sharedApplication] openURL:[NSURL URLWithString:
            @"http://maps.google.com/maps?q=-33.895851,151.18483+(Some+Description)&z=19"]];

    This is URL 2:

        [[UIApplication sharedApplication] openURL:[NSURL URLWithString:
            @"http://maps.google.com/maps?q=-33.895851,151.18483&z=19"]];

    This is relikd's suggestion:

        [[UIApplication sharedApplication] openURL:[NSURL URLWithString:
            @"http://maps.google.com/maps?z=19&q=-33.895851,151.18483+(Some+Description)"]];

    I want the image I see in screenshot 2, but with a pin and a description.


  • Using StructureMap to create classes by a name?

    - by Bevan
    How can I use StructureMap to resolve to an appropriate implementation of an interface based on a name stored in an attribute?

    In my project, I have many different kinds of widgets, each descending from IWidget, and each decorated with an attribute specifying the kind of associated element. To illustrate:

        [Configuration("header")]
        public class HeaderWidget : IWidget { }

        [Configuration("linegraph")]
        public class LineGraphWidget : IWidget { }

    When processing my (XML) configuration file, I want to obtain an instance of the appropriate concrete class based on the name of the element I'm processing:

        public IWidget CreateWidget(XElement definition)
        {
            var kind = definition.Name.LocalName;
            var widget = // What goes here?
            widget.Configure(definition);
            return widget;
        }

    Each definition should result in a different widget being created; I don't need or want the instances to be shared. In the past I've written plenty of code to do this kind of thing manually, including writing a custom "roll-your-own" IoC container for one project. However, one of my goals with this project is to become proficient with StructureMap instead of reinventing the wheel. I think I've already managed to set up automatic scanning of assemblies so that StructureMap knows about all my IWidget implementations:

        public class WidgetRegistration : Registry
        {
            public WidgetRegistration()
            {
                Scan(scanner =>
                {
                    scanner.AssembliesFromApplicationBaseDirectory();
                    scanner.AddAllTypesOf<IWidget>();
                });
            }
        }

    However, this isn't registering the names of my widgets with StructureMap. What do I need to add to make my scenario work? (While I am trying to use StructureMap in this project, an answer showing me how to solve this problem with a different DI/IoC tool would still be valuable.)
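
    One plausible route, sketched here under the assumption of a StructureMap 2.6-era API and that the [Configuration] attribute exposes its string as a Name property: have the scanner name each instance from the attribute, then resolve by that name:

        public class WidgetRegistration : Registry
        {
            public WidgetRegistration()
            {
                Scan(scanner =>
                {
                    scanner.AssembliesFromApplicationBaseDirectory();
                    // name each IWidget plugin after its attribute value
                    scanner.AddAllTypesOf<IWidget>().NameBy(type =>
                        ((ConfigurationAttribute)Attribute.GetCustomAttribute(
                            type, typeof(ConfigurationAttribute))).Name);
                });
            }
        }

        public IWidget CreateWidget(XElement definition)
        {
            var kind = definition.Name.LocalName;
            var widget = ObjectFactory.GetNamedInstance<IWidget>(kind);
            widget.Configure(definition);
            return widget;
        }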


  • Winsock WSAAsyncSelect sending without an infinite buffer

    - by Xexr
    Hi, this is more of a design question than a specific code question; I'm sure I am missing the obvious and just need another set of eyes. I am writing a multi-client server based on WSAAsyncSelect; each connection is made into an object of a connection class I have written, which contains associated settings, buffers, etc.

    My question concerns FD_WRITE. I understand how it operates: one FD_WRITE is sent immediately after a connection is established; thereafter, you should send until WSAEWOULDBLOCK is received, at which point you store what is left to send in a buffer and wait to be told that it is OK to send again.

    This is where I have a problem: how large do I make this holding buffer within each connection's object? The amount of time until a new FD_WRITE is received is unknown, and I could be attempting to send a lot of stuff during this period, all the time adding to my outgoing buffer. If I make the buffer dynamic, memory usage could spiral out of control if, for whatever reason, I am unable to send() and shrink the buffer.

    So my question is: how do you generally handle this situation? Note I am not talking about the network buffer itself which winsock uses, but one of my own creation used to "queue" up sends. Hope I explained that well enough, thanks all!
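
    A common answer is a per-connection queue with a hard cap, so the application applies back-pressure (or drops the connection) instead of growing without bound. A rough sketch of that shape, with an arbitrary cap value:

        #include <algorithm>
        #include <deque>
        #include <vector>
        #include <winsock2.h>

        class Connection {
            SOCKET sock_;
            std::deque<char> pending_;                     // bytes deferred by WSAEWOULDBLOCK
            static const size_t kMaxPending = 256 * 1024;  // cap: tune per protocol

        public:
            explicit Connection(SOCKET s) : sock_(s) {}

            // Returns false when the peer is too slow and the cap is hit.
            bool Send(const char* data, int len) {
                if (pending_.empty()) {
                    int sent = send(sock_, data, len, 0);
                    if (sent == len) return true;
                    if (sent == SOCKET_ERROR) {
                        if (WSAGetLastError() != WSAEWOULDBLOCK) return false;
                        sent = 0;
                    }
                    data += sent; len -= sent;
                }
                if (pending_.size() + len > kMaxPending) return false;
                pending_.insert(pending_.end(), data, data + len);
                return true;
            }

            // Call on FD_WRITE: drain until empty or the socket blocks again.
            void OnFdWrite() {
                while (!pending_.empty()) {
                    size_t n = std::min(pending_.size(), size_t(4096));
                    std::vector<char> chunk(pending_.begin(), pending_.begin() + n);
                    int sent = send(sock_, &chunk[0], (int)n, 0);
                    if (sent == SOCKET_ERROR) break;   // wait for the next FD_WRITE
                    pending_.erase(pending_.begin(), pending_.begin() + sent);
                }
            }
        };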


  • php import to mysql hosted on godaddy

    - by julio
    Yeah, I know! It's not my choice. I am doing a large data import using a PHP script into a MySQL DB hosted on GoDaddy. It seems their MySQL connection gets killed every few hours regardless of what work it's doing. Their tech support is useless, and I've exhausted myself writing attempted workarounds.

    Right now, I'm trying to do a mysql_ping every few minutes, and if the ping returns false, I attempt to open a new DB connection. My script (which takes many hours to complete) keeps failing with the very unhelpful message "MySQL server has gone away". I understand MySQL trying to close a connection that's been open too long, but the connection is not idle; it's busy basically the whole time, and with the pings I've written in, it should not be idle longer than 5 minutes at most at any time. (These same scripts work with no errors on Amazon AWS servers, my local servers, etc.) Any help most appreciated! I'm about to give up.
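
    For what it's worth, a minimal sketch of an alternative to scheduled pings, using the legacy mysql_* API from the post and placeholder DB_* credential constants: detect error 2006 ("server has gone away") at the moment a statement fails, reconnect, and retry that statement once:

        <?php
        function run_query($sql) {
            $result = mysql_query($sql);
            // 2006 = CR_SERVER_GONE_ERROR: the connection was dropped mid-import
            if ($result === false && mysql_errno() == 2006) {
                @mysql_close();
                $link = mysql_connect(DB_HOST, DB_USER, DB_PASS);  // your credentials
                mysql_select_db(DB_NAME, $link);
                $result = mysql_query($sql);   // retry once on the fresh connection
            }
            return $result;
        }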


  • What is the cleanest way to use anonymous functions?

    - by Fletcher Moore
    I've started to use Javascript a lot more, and as a result I am writing things complex enough that organization is becoming a concern. However, this question applies to any language that allows you to nest functions. Essentially, when should you use an anonymous function over a named global or inner function? At first I thought it was the coolest feature ever, but I think I am going overboard.

    Here's an example I wrote recently, omitting all the variable declarations and conditionals so that you can see the structure:

        function printStream() {
            return fold(function (elem, acc) {
                ...
                var comments = (function () {
                    return fold(function (comment, out) {
                        ...
                        return out + ...;
                    }, '', elem.comments);
                })();
                return acc + ... + comments;
            }, '', data.stream);
        }

    I realize there's some kind of beauty in being so compact, but I think it probably isn't a good idea to do this, in the same way you wouldn't want a ton of code inside a double for loop.
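
    As a point of comparison, here is one way the same structure reads with the inner folds pulled out and named. formatComment and renderItem are hypothetical stand-ins for the elided bodies, and fold(fn, initial, list) is the helper from the original:

        function renderComments(elem) {
            return fold(function (comment, out) {
                return out + formatComment(comment);   // hypothetical formatter
            }, '', elem.comments);
        }

        function printStream() {
            return fold(function (elem, acc) {
                return acc + renderItem(elem) + renderComments(elem);
            }, '', data.stream);
        }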


  • Why Android for enterprise applications?

    - by mcabral
    Recently one of our clients is considering the possibility of picking up an old WinMobile 5.0 project. Several features are to be added, to the point that it will be a major version update. The client is worried about the mobile market, and thinks there's a chance all the effort put into this development will have to be thrown away in a couple of years due to the dynamics of the mobile market and the deprecation of mobile devices.

    So the client is not sure whether he should continue with Windows Mobile (moving from WM 5.0 to 6.x) or start from scratch with another technology. For our part, we have been studying the mobile market, looking for clues as to which will be the winning horse. The safe move seems to be to continue with WM, just because rewriting an entire application from scratch involves more risks and delays. On the other hand, WM seems to be losing market share, and the ghost of an exit on their part is growing stronger every day.

    But what can be said about Android? Everyone is talking about it and it is growing at full speed, but what advantages will it bring to the table? Why should we start a fresh application on this technology? So the question remains the same: is Android mature enough for an enterprise application? Would you recommend it to one of your clients? Would you port/rewrite a WM application to Android? What's the trade-off?

    EDIT: Addressing commentaries. The app is entirely built with C# and the Compact Framework. The app is for logistics/management.


  • autotools: no rule to make target all

    - by Raffo
    I'm trying to port an application I'm developing to autotools. I'm not an expert at writing makefiles, and being able to use autotools is a requisite for me. The structure of the project is the following:

        ../src/Main.cpp
        ../src/foo/
        ../src/foo/x.cpp
        ../src/foo/y.cpp
        ../src/foo/A/k.cpp
        ../src/foo/A/Makefile.am
        ../src/foo/Makefile.am
        ../src/bar/
        ../src/bar/z.cpp
        ../src/bar/w.cpp
        ../src/bar/Makefile.am
        ../inc/foo/
        ../inc/bar/
        ../inc/foo/A
        ../configure.in
        ../Makefile.am

    The root folder of the project contains a "src" folder containing the main of the program and a number of subfolders containing the other sources of the program. The root of the project also contains an "inc" folder containing the .h files, which are nothing more than the definitions of the classes in "src"; thus "inc" reflects the structure of "src". I have written the following configure.in in the root:

        AC_INIT([PNAME], [1.0])
        AC_CONFIG_SRCDIR([src/Main.cpp])
        AC_CONFIG_HEADER([config.h])
        AC_PROG_CXX
        AC_PROG_CC
        AC_PROG_LIBTOOL
        AM_INIT_AUTOMAKE([foreign])
        AC_CONFIG_FILES([Makefile src/Makefile src/foo/Makefile src/foo/A/Makefile src/bar/Makefile])
        AC_OUTPUT

    The following is ../Makefile.am:

        SUBDIRS = src

    Then in ../src, where the main of the project is contained:

        bin_PROGRAMS = pname
        gsi_SOURCES = Main.cpp
        AM_CPPFLAGS = -I../../inc/foo \
                      -I../../inc/foo/A \
                      -I../../inc/bar/
        pname_LDADD = foo/libfoo.a bar/libbar.a
        SUBDIRS = foo bar

    And in ../src/foo:

        noinst_LIBRARIES = libfoo.a
        libfoo_a_SOURCES = \
            x.cpp \
            y.cpp
        AM_CPPFLAGS = \
            -I../../inc/foo \
            -I../../inc/foo/A \
            -I../../inc/bar

    And the analogous in src/bar. The problem is that after calling automake and autoconf, "make" fails. In particular, it enters the directory src, then foo, and creates libfoo.a, but the same fails for libbar.a with the following error:

        Making all in bar
        make[3]: Entering directory `/user/Raffo/project/src/bar'
        make[3]: *** No rule to make target `all'.  Stop.

    I have read the autotools documentation, but I'm not able to find an example similar to the one I am working on. Unfortunately I can't change the directory structure, as this is a fixed requisite of the project I'm working on. I don't know if you can help me or give me any hint, but maybe you can guess the error or point me to a similarly structured example. Thank you.
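
    For comparison, this is roughly what src/bar/Makefile.am would need to contain for automake to generate an `all' target in that directory (a sketch mirroring the foo makefile; after adding or fixing it, aclocal, automake --add-missing, and autoconf need to be rerun):

        # src/bar/Makefile.am
        noinst_LIBRARIES = libbar.a
        libbar_a_SOURCES = \
            z.cpp \
            w.cpp
        AM_CPPFLAGS = \
            -I../../inc/foo \
            -I../../inc/foo/A \
            -I../../inc/bar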


  • C++ Switch won't compile with externally defined variable used as case

    - by C Nielsen
    I'm writing C++ using the MinGW GNU compiler, and the problem occurs when I try to use an externally defined integer variable as a case in a switch statement. I get the following compiler error: "case label does not reduce to an integer constant". Because I've declared the integer variable as extern, I believe that it should compile; does anyone know what the problem may be? Below is an example.

    test.cpp:

        #include <iostream>
        #include "x_def.h"

        int main()
        {
            std::cout << "Main Entered" << std::endl;

            switch(0)
            {
                case test_int:
                    std::cout << "Case X" << std::endl;
                    break;
                default:
                    std::cout << "Case Default" << std::endl;
                    break;
            }
            return 0;
        }

    x_def.h:

        extern const int test_int;

    x_def.cpp:

        const int test_int = 0;

    This code will compile correctly on Visual C++ 2008. Furthermore, a Montanan friend of mine checked the ISO C++ standard, and it appears that any const-integer expression should work. Is this possibly a compiler bug, or have I missed something obvious? Here's my compiler version information:

        Reading specs from C:/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs
        Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug
        Thread model: win32
        gcc version 3.4.5 (mingw-vista special r3)
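
    A case label needs an integral constant expression, and with only the extern declaration visible, test.cpp cannot see the value at compile time, so GCC is within its rights to reject it. A minimal sketch of the usual fix: put the initializer in the header, giving the constant internal linkage in each translation unit:

        // x_def.h
        // const at namespace scope has internal linkage in C++, so each
        // translation unit gets its own compile-time-known copy
        const int test_int = 0;

        // x_def.cpp no longer defines test_int at all;
        // test.cpp is unchanged: `case test_int:' is now a constant expression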


  • qtruby, QUiLoader and respond_to?

    - by Tim Sylvester
    I'm writing a simple Qt4 application in Ruby (using qtruby) to teach myself both. Mostly it has gone well, but in trying to use Ruby's "duck typing" I've run into a snag: respond_to? doesn't seem to reflect reality.

        irb(main):001:0> require 'rubygems'
        => true
        irb(main):002:0> require 'Qt4'
        => true
        irb(main):003:0> require 'qtuitools'
        => true
        irb(main):004:0> Qt::Application.new(ARGV)
        => #<Qt::Application:0xc3c9a08 objectName="ruby">
        irb(main):005:0> file = Qt::File.new("dlg.ui") { open(Qt::File::ReadOnly) }
        => #<Qt::File:0xc2e1748 objectName="">
        irb(main):006:0> obj = Qt::UiLoader.new().load(file, nil)
        => #<Qt::Dialog:0xc2bf650 objectName="dlg", x=0, y=0, width=283, height=244>
        irb(main):007:0> obj.respond_to?('children')
        => false
        irb(main):008:0> obj.respond_to?(:children)
        => false
        irb(main):009:0> obj.children
        => [#<Qt::FormInternal::TranslationWatcher:0xc2a1980 objectName="">, ...

    As you can see, when I check to ensure that the object I get back from loading the UI file has a children accessor, I get false. If I call that accessor, however, I get an array rather than a NoMethodError. So, is this a bug, or have I incorrectly understood respond_to?? This looks like the problem described here, but I thought I would get an expert opinion before filing a bug against the project.
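
    A likely explanation, offered as an assumption about qtruby's internals: bindings like this often dispatch to the underlying C++ methods via method_missing, and plain method_missing is invisible to respond_to?. A duck-typing check that tolerates such proxies might look like:

        # Treat "responds" and "dispatches via method_missing" the same way.
        # Probing with send is only safe for side-effect-free readers.
        def quacks_like?(obj, name)
          return true if obj.respond_to?(name)
          begin
            obj.send(name)
            true
          rescue NoMethodError
            false
          end
        end

        # quacks_like?(obj, :children)  #=> true for the Qt::Dialog above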


  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production, on which I can try out different things.

    Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production.

    Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
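
    For reference, this is the shape of the standard DBMS_STATS round-trip (a sketch: the APP schema and PROD_STATS table names are placeholders, and on 9i the statistics table still has to be moved between databases with exp/imp):

        -- on production (run by whoever has access):
        BEGIN
          DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'PROD_STATS');
          DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'PROD_STATS');
        END;
        /

        -- move APP.PROD_STATS across with exp/imp, then on development:
        BEGIN
          DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'PROD_STATS');
        END;
        /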


  • Using fields from an association (has_many) model with formtastic in rails

    - by pduersteler
    I searched and tried a lot, but I can't accomplish what I want, so here's my problem:

        class Moving < ActiveRecord::Base
          has_many :movingresources, :dependent => :destroy
          has_many :resources, :through => :movingresources
        end

        class Movingresource < ActiveRecord::Base
          belongs_to :moving
          belongs_to :resource
        end

        class Resource < ActiveRecord::Base
          has_many :movingresources
          has_many :movings, :through => :movingresources
        end

    Movingresource contains additional fields, like "quantity". We're working on the views for 'bill'. Thanks to formtastic, the whole relationship thing is simplified to just writing:

        <%= form.input :workers, :as => :check_boxes %>

    and I get a really nice checkbox list. But what I haven't found out so far is: how can I use the additional fields from 'movingresource', showing my desired fields from that model next to or under each checkbox? I saw different approaches, mainly manually looping through an array of objects and creating the appropriate forms, using :for in a form.inputs block, or not. But none of those solutions were clean (e.g. they worked for the edit view but not for new, because the required objects were not built or generated, and generating them caused a mess). I want to know your solutions for this! :-)
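
    One conventional route, sketched under the assumption of Rails nested attributes and formtastic's semantic_fields_for: render a row per movingresource, with its extra columns next to a resource selector, instead of the generated checkbox list:

        class Moving < ActiveRecord::Base
          has_many :movingresources, :dependent => :destroy
          has_many :resources, :through => :movingresources
          accepts_nested_attributes_for :movingresources, :allow_destroy => true
        end

        <%# in the form view; rows must exist beforehand for the `new' case,
            e.g. built in the controller: @moving.movingresources.build %>
        <% form.semantic_fields_for :movingresources do |mr| %>
          <%= mr.input :resource, :as => :select %>
          <%= mr.input :quantity %>
        <% end %>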


  • Boost Date_Time problem compiling a simple program

    - by Andry
    Hello! I'm writing a very stupid program using the Boost Date_Time library:

        int main(int argc, char** argv)
        {
            using namespace boost::posix_time;

            date d(2002, Feb, 1);                  // an arbitrary date
            ptime t1(d, hours(5) + nanosec(100));  // date + time-of-day offset
            ptime t2 = t1 - minutes(4) + seconds(2);
            ptime now = second_clock::local_time(); // use the clock
            date today = now.date();               // get the date part out of the time
        }

    Well, I cannot compile it; the compiler does not recognize a type. I have used many features of Boost libs, like serialization and more. I built them correctly and, looking in my /usr/local/lib folder, I can see that libboost_date_time.so is there (a good sign, which means I was able to build that library). When I compile, I write the following:

        g++ -lboost_date_time main.cpp

    But the errors it shows when I specify the lib are the same as those when I do not specify any lib. What is this? Anyone know? The error is:

        main.cpp: In function ‘int main(int, char**)’:
        main.cpp:9: error: ‘date’ was not declared in this scope
        main.cpp:9: error: expected ‘;’ before ‘d’
        main.cpp:10: error: ‘d’ was not declared in this scope
        main.cpp:10: error: ‘nanosec’ was not declared in this scope
        main.cpp:13: error: expected ‘;’ before ‘today’
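
    The missing type is the tell here: date lives in boost::gregorian, not boost::posix_time, and the snippet never includes the Boost headers, so no amount of linking will help. A sketch of a version that should compile (nanosec additionally requires Boost built with nanosecond resolution, so seconds stands in for it):

        #include <boost/date_time/gregorian/gregorian.hpp>
        #include <boost/date_time/posix_time/posix_time.hpp>

        int main()
        {
            using namespace boost::posix_time;
            using namespace boost::gregorian;   // brings `date' into scope

            date d(2002, Feb, 1);
            ptime t1(d, hours(5) + seconds(1));
            ptime t2 = t1 - minutes(4) + seconds(2);
            ptime now = second_clock::local_time();
            date today = now.date();
            return 0;
        }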


  • C++ Exception Handling

    - by user1413793
    So I was writing some code, and I noticed that, apart from syntactical, type, and other compile-time errors, C++ does not seem to throw any other exceptions. I decided to test this out with a very trivial program:

        #include <iostream>

        int main()
        {
            std::cout << 5 / 0 << std::endl;
            return 1;
        }

    When I compiled it using g++, it gave me a warning saying I was dividing by 0, but it still compiled the code. Then when I ran it, it printed some really large arbitrary number. What I want to know is: how does C++ deal with exceptions? Integer division by 0 should be a very trivial example of when an exception should be thrown and the program should terminate. Do I have to essentially enclose my entire program in a huge try block and then catch certain exceptions? I know in Python, when an exception is thrown, the program immediately terminates and prints out the error. What does C++ do? Are there even runtime exceptions which stop execution and kill the program?
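
    For the record, C++ only throws what something explicitly throws: integer division by zero is undefined behaviour, not an exception. A sketch of making the failure visible yourself:

        #include <iostream>
        #include <stdexcept>

        int safe_div(int a, int b)
        {
            if (b == 0)
                throw std::runtime_error("division by zero"); // nothing throws unless told to
            return a / b;
        }

        int main()
        {
            try {
                std::cout << safe_div(5, 0) << std::endl;
            } catch (const std::exception& e) {
                std::cerr << "caught: " << e.what() << std::endl; // terminates cleanly
                return 1;
            }
            return 0;
        }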


  • Deeply nested subqueries for traversing trees in MySQL

    - by nickf
    I have a table in my database where I store a tree structure using the hybrid Nested Set (MPTT) model (the one which has lft and rght values) and the Adjacency List model (storing parent_id on each node):

        my_table (id, parent_id, lft, rght, alias)

    This question doesn't relate to any of the MPTT aspects of the tree, but I thought I'd leave it in in case anyone had a good idea about how to leverage that. I want to convert a path of aliases to a specific node. For example, "users.admins.nickf" would find the node with alias "nickf" which is a child of one with alias "admins", which is a child of "users", which is at the root. There is a unique index on (parent_id, alias).

    I started out by writing the function so it would split the path into its parts, then query the database one by one:

        SELECT `id` FROM `my_table` WHERE `parent_id` IS NULL AND `alias` = 'users'; -- 1
        SELECT `id` FROM `my_table` WHERE `parent_id` = 1 AND `alias` = 'admins';    -- 8
        SELECT `id` FROM `my_table` WHERE `parent_id` = 8 AND `alias` = 'nickf';     -- 37

    But then I realised I could do it with a single query, using a variable amount of nesting:

        SELECT `id` FROM `my_table`
        WHERE `parent_id` = (
            SELECT `id` FROM `my_table`
            WHERE `parent_id` = (
                SELECT `id` FROM `my_table`
                WHERE `parent_id` IS NULL AND `alias` = 'users'
            ) AND `alias` = 'admins'
        ) AND `alias` = 'nickf';

    Since the number of sub-queries depends on the number of steps in the path, am I going to run into issues with having too many subqueries? (If there even is such a thing.) Are there any better/smarter ways to perform this query?
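
    An equivalent formulation, sketched here, is to generate one self-join per path step instead of nesting; the unique (parent_id, alias) index keeps each hop cheap, and joins are usually kinder to the optimizer than deeply nested scalar subqueries:

        SELECT t3.`id`
        FROM `my_table` AS t1
        JOIN `my_table` AS t2 ON t2.`parent_id` = t1.`id` AND t2.`alias` = 'admins'
        JOIN `my_table` AS t3 ON t3.`parent_id` = t2.`id` AND t3.`alias` = 'nickf'
        WHERE t1.`parent_id` IS NULL
          AND t1.`alias` = 'users';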


  • Why is my code slower using #import "progid:typelib" than using "MFC Class From TypeLib"?

    - by Pakman
    I am writing an automation client in Visual C++ with MFC. If I right-click on my solution » Add » Class, I have the option to select MFC Class From TypeLib. Selecting this option generates source/header files for all interfaces. This allows me to write code such as:

        #include "CApplication.h"
        #include "CDocument.h"

        // ... connect to automation server ...
        CApplication *myApp = new CApplication(pDisp);
        CDocument myDoc = myApp->get_ActiveDocument();

    Using this method, my benchmarking function that makes about 12000 automation calls takes 1 second. Meanwhile, the following code:

        #import "progid:Library.Application"

        Library::IApplicationPtr myApp;
        // ... connect to automation server ...
        Library::IDocumentPtr myDoc = myApp->GetActiveDocument();

    takes about 2.4 seconds for the same benchmark. I assume the smart-pointer implementation is slowing me down, but I don't know why. Even worse, I'm not sure how to use the #import construct to achieve the speeds that the first method yields. Is this possible? How, or why not? Thanks for your time!


  • what does this attempted trojan horse code do?

    - by bstullkid
    It looks like this just sends a ping, but what's the point of that when you can just use ping?

        /* WARNING: this is someone's attempt at writing a malware trojan. Do not
           compile and *definitely* don't install. I added an exit as the first
           line to avoid mishaps - msw */

        int main(int argc, char *argv[])
        {
            exit(1);

            unsigned int pid = 0;
            char buffer[2];
            char *args[] = { "/bin/ping", "-c", "5", NULL, NULL };

            if (argc != 2)
                return 0;

            args[3] = strdup(argv[1]);

            for (;;) {
                gets(buffer); /* FTW */
                if (buffer[0] == 0x6e)
                    break;
                switch (pid = fork()) {
                case -1:
                    printf("Error Forking\n");
                    exit(255);
                case 0:
                    execvp(args[0], args);
                    exit(1);
                default:
                    break;
                }
            }
            return 255;
        }


  • Alternative way of implementation of a code

    - by vikramjb
    The title is not exactly meaningful, but I am not sure what else to name it. I wrote some TOC generation code a while back. Building on that, I was writing code to check for duplicates as well. The code is as follows:

        curNumber = getTOCReference(selItem.SNo, IsParent);
        CheckForDuplicates(curNumber, IsParent, out realTOCRef);
        curNumber = realTOCRef;

    And the code for CheckForDuplicates is:

        ListViewItem curItem = this.tlvItems.FindItemWithText(curNumber);
        if (curItem != null)
        {
            curNumber = this.getTOCReference(curNumber, !IsParent);
            CheckForDuplicates(curNumber, IsParent, out realTOCRef);
        }
        else
        {
            realTOCRef = curNumber;
        }

    What this code does: it takes a TOC reference, checks whether it already exists in the ObjectListView, and generates a new TOC reference if an existing one is found. Once it determines that the generated TOC reference is not in the list, it stores it in realTOCRef and sends it back to the main calling code.

    I have used "out" to return the last generated TOC reference, which is something I did not want to do. The reason I did it is that after the non-duplicate TOC reference was generated, the return did not go straight back to the calling code; instead it unwound through the previous recursive calls, and in doing so the TOC reference that was to be returned got reset. I would appreciate any help on this.
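
    A sketch of the reshaped helper: returning the final reference up the recursion removes the out parameter entirely, since each level just passes along whatever the deeper call settled on:

        // Returns a reference that does not yet appear in the list view.
        private string GetUniqueTOCRef(string curNumber, bool isParent)
        {
            ListViewItem curItem = this.tlvItems.FindItemWithText(curNumber);
            if (curItem == null)
                return curNumber;                  // no clash: keep this one

            // clash: derive the next candidate and let the recursion decide
            return GetUniqueTOCRef(this.getTOCReference(curNumber, !isParent), isParent);
        }

        // call site:
        // curNumber = GetUniqueTOCRef(getTOCReference(selItem.SNo, IsParent), IsParent);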


  • How to create platform independent 3D video on 3D TV via HDMI 1.4?

    - by artif
    I am writing a real-time, interactive 3D visualization program, and at each point in the program I can compute 2 images (bitmaps) that are meant to look 3D together by means of stereoscopy. How do I get my program to display the image pairs such that they look 3D on a 3D TV? Is there a platform-independent way of accomplishing it? (By platform I mean independent of GPU brand, operating system, 3D TV vendor, etc.) If not, which is preferable: to lock in by GPU, OS, or 3D TV?

    I suppose I need to be using an HDMI 1.4 cable with the 3D TV? HDMI 1.4 can encode stereoscopy via the side-by-side method. But how do I send such an encoded signal to the monitor? What kind of libraries do I use for this sort of thing: Windows DirectShow? If DirectShow is correct, is there a cross-platform equivalent available?

    If anyone asks, yes I have seen this question: http://stackoverflow.com/questions/2811350/generating-3d-tv-stereoscopic-output-programmatically. However, correct me if I am wrong, it does not appear to be what I'm looking for. I do not have an OpenGL or Direct3D program that generates polygons, for which an Nvidia card can do ad-hoc impromptu stereoscopy simply by rendering the scene from 2 slightly offset points of view and then displaying those 2 images on the monitor. My program already has those image pairs and needs to display them (and they are not the result of rendering polygons).

    Btw, I have never done any major multimedia programming before and know very little about HDMI, DirectShow, 3D TVs, etc., so pardon me if any parts of this question did not make sense at all.
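
    On the encoding half of the question, side-by-side is just a pixel layout: each eye's image is squeezed into half the frame width, and the TV (switched to side-by-side mode) splits them back apart. A minimal sketch of composing such a frame from the two bitmaps, independent of any display API:

        #include <cstdint>
        #include <vector>

        // Pack two width x height RGBA images into one side-by-side frame of the
        // same size: every other column of each eye is dropped (half horizontal
        // resolution per eye), which is exactly what SbS 3D trades away.
        std::vector<uint32_t> packSideBySide(const uint32_t* left,
                                             const uint32_t* right,
                                             int width, int height)
        {
            std::vector<uint32_t> frame(static_cast<size_t>(width) * height);
            for (int y = 0; y < height; ++y) {
                for (int x = 0; x < width / 2; ++x) {
                    frame[y * width + x]             = left [y * width + 2 * x];
                    frame[y * width + width / 2 + x] = right[y * width + 2 * x];
                }
            }
            return frame; // blit full-screen with any API; set the TV to SbS mode
        }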


  • How to embed XBase expressions in an Xtext DSL

    - by Marcus Mathioudakis
    I am writing a simple little DSL for specifying constraints on messages, and have been trying without success for a while to embed Xbase expressions into the language. The grammar looks like this:

        grammar org.xtext.businessrules.BusinessRules with org.eclipse.xtext.xbase.Xbase

        //import "http://www.eclipse.org/xtext/xbase/Xbase" as xbase
        import "http://www.eclipse.org/xtext/common/JavaVMTypes" as jvmTypes

        generate businessRules "http://www.xtext.org/businessrules/BusinessRules"

        Start:
            rules+=Constraint*;

        Constraint:
            {Constraint}
            'FOR' 'PAYLOAD' payload=PAYLOAD 'ELEMENT' element=ID
            'CONSTRAINED BY' constraint=XExpression;

        PAYLOAD:
            "SimulationSessionEvents"
            | "stacons"
            | "any";

        Range:
            'above' min=INT ('below' max=INT)?
            | 'below' max=INT ('above' min=INT)?;

    When trying to parse a file such as:

        FOR PAYLOAD SimulationSessionEvents ELEMENT matrix CONSTRAINED BY ...

    I can't get it to work when ... is any kind of arithmetic expression, although it works when ... is a loop or an if expression, or even just a number. As soon as I try something like '-5' or '4-5', it says "Couldn't resolve reference to JvmIdentifiableElement '-'", even though the Xbase.xtext grammar looks like it allows these expressions. I don't think I'm missing any jars, as it doesn't complain when I run the mwe workflow, only when trying to parse the input file. Any help would be much appreciated.

