Search Results

Search found 14956 results on 599 pages for 'undefined index'.

Page 185/599 | < Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.
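    A side note on the array example above: in a dynamic language such as PHP the failure mode is easy to demonstrate, because nothing is caught until runtime. The sketch below is not from the article; it simply restates the "value of index 4 from a two-element array" case (the exact notice text varies by PHP version, e.g. "Undefined offset" versus "Undefined array key"):

        <?php
        // A two-element array: the type system only records "array", not "array of length 2".
        $values = array('a', 'b');

        // The parser accepts this without complaint; only at runtime does PHP emit an
        // undefined offset / undefined array key notice and hand back NULL.
        $item = $values[4];
        var_dump($item); // NULL

        // The guard has to be written (and remembered) by hand, which is exactly the kind
        // of check a length-aware (dependent) type could enforce before the program runs.
        $safe = array_key_exists(4, $values) ? $values[4] : null;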

    Read the article

  • SSI: Failed String Comparison with CGI Environment Variable [migrated]

    - by Calyo Delphi
    I am currently working on developing a personal website. It's not my first time doing this, but this is my first major foray into implementing SSI. I've run myself into a wall, however, with an if-else directive that uses one of the CGI environment variables as part of its comparison. Even after some limited attempts at debugging, all of the output and documentation that I have means that the comparisons being made should fail outright. This is not the case, and the wrong evaluation is being made by the if-else directive. Here's the code in the file index.shtml: <head> <!--#set var="page" value="Home" --> <!--#include file="headlinks.shtml" --> <style> img#ref { float: right; margin-left: 8px; border-width: 0px; } </style> </head> Here's the code in the file headlinks.shtml: <title><!--#echo var="page" --> &ndash; <!--#echo var="HTTP_HOST" --></title> <!--#set var="docroot" value="${DOCUMENT_ROOT}" --> <!--#echo var="docroot" --> <!--#if expr="( $docroot != '/Applications/MAMP/htdocs' ) || ( $docroot != '/home/dragarch/public_html' )" --> <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> <!--#else --> <link rel="stylesheet" type="text/css" href="style.css"> <link rel="shortcut icon" type="image/svg+xml" href="favicon.svg" /> <!--#endif --> And here's the output for the file index.shtml: <title>Home &ndash; dragarch</title> /Applications/MAMP/htdocs <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> Both style.css and favicon.svg are in the document root with index.shtml, so the if directive should fail and default to the output of the else directive. As you can see, while the document root (which is currently the MAMP htdocs folder on my own notebook) is correct according to the output of the echo directive, the comparison in the if-else directive fails to compare the strings properly. I'm using this page for my documentation: http://httpd.apache.org/docs/2.2/mod/mod_include.html I'm at a complete loss as to why this is the case, and need a bit of help here. EDIT: I should note that dragarch is a hostname that I configured in /etc/hosts to point to 127.0.0.1 so I could test the site without having to use localhost. It has no real effect on the functionality of anything, other than to just act as a prettier hostname to use.
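    Independent of mod_include itself, the expression above can never take the else branch: two != comparisons joined by || are true for every possible value of $docroot, because it cannot equal both paths at once (the test probably wants &&, or == comparisons with the two branches swapped). A small PHP sketch, used here only because it is easy to run, shows the same boolean logic:

        <?php
        // Stand-in for the SSI expr, to illustrate the boolean logic only.
        $docroot = '/Applications/MAMP/htdocs';

        // Always true: $docroot can never equal both strings at the same time.
        var_dump($docroot != '/Applications/MAMP/htdocs' || $docroot != '/home/dragarch/public_html'); // bool(true)

        // The intended "not one of my known document roots" test needs &&.
        var_dump($docroot != '/Applications/MAMP/htdocs' && $docroot != '/home/dragarch/public_html'); // bool(false)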

    Read the article

  • Changes made to cli's php.ini not taking effect

    - by Sandeepan Nath
    I have two php.ini files - /etc/php.ini, which loads in the case of the CLI, and /opt/lampp/etc/php.ini, which loads in the case of the browser. I am able to use PHP's Mailparse extension after adding the line extension=mailparse.so in /opt/lampp/etc/php.ini and restarting lampp. But I am not able to load the same in the case of the command line - I get PHP Fatal error: Call to undefined function mailparse_msg_create() in ... mailparse_msg_create() is a part of the Mailparse extension. I tried logging back in as the user after making the change and even restarting the system. What needs to be done so that the change takes effect? Update I checked that php -i | grep 'Configuration File' gives PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mailparse.so' - /usr/lib/php/modules/mailparse.so: cannot open shared object file: No such file or directory in Unknown on line 0 Configuration File (php.ini) Path => /etc Loaded Configuration File => /etc/php.ini Update 2 I copied the mailparse.so from /opt/lampp/lib/php/extensions/no-debug-non-zts-20090626 and put it in /usr/lib/php/modules. I added extension=mailparse.so to /etc/php.ini as well. But it still showed this warning PHP Warning: PHP Startup: Unable to load dynamic library ... As told by Lekensteyn, I did ldd /usr/lib/php/modules/mailparse.so and got ldd: warning: you do not have execution permission for /usr/lib/php/modules/mailparse.so' So I gave execute permission. Then ldd /usr/lib/php/modules/mailparse.so showed linux-gate.so.1 => (0x00110000) libc.so.6 => /lib/libc.so.6 (0x0011d000) /lib/ld-linux.so.2 (0x003aa000) which looks normal. But now, running the php command says PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mailparse.so' - /usr/lib/php/modules/mailparse.so: undefined symbol: mbfl_name2no_encoding in Unknown on line 0
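    A throwaway script run with the CLI binary (a diagnostic sketch, not part of the original question) shows which ini files and extension directory that binary really uses; note also that mailparse depends on mbstring, so an "undefined symbol: mbfl_name2no_encoding" error usually means mbstring is not loaded, or is listed after mailparse, in the ini the CLI reads:

        <?php
        // diagnose.php - run as: php diagnose.php
        echo 'Loaded php.ini:       ', php_ini_loaded_file(), PHP_EOL;
        echo 'Extra ini files:      ', var_export(php_ini_scanned_files(), true), PHP_EOL;
        echo 'extension_dir:        ', ini_get('extension_dir'), PHP_EOL;
        echo 'mbstring loaded:      ', extension_loaded('mbstring') ? 'yes' : 'no', PHP_EOL;
        echo 'mailparse loaded:     ', extension_loaded('mailparse') ? 'yes' : 'no', PHP_EOL;
        echo 'mailparse_msg_create: ', function_exists('mailparse_msg_create') ? 'yes' : 'no', PHP_EOL;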

    Read the article

  • Uninstall php5 installed from source.

    - by diegomichel
    I have tried to install php5 from source, and it worked... Then for some reason I needed to install the official packages, so I tried a make uninstall, and to my surprise there is no such make uninstall... so I tried to delete all the installed files by hand. Then I installed the official Debian packages and it worked fine... till I needed to install the sqlite module, which gives me the following error: php --version PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/pdo_sqlite.so' - /usr/lib/php5/20090626/pdo_sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/sqlite.so' - /usr/lib/php5/20090626/sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 PHP 5.3.1-5 with Suhosin-Patch (cli) (built: Feb 22 2010 22:46:05) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies So I remembered that manual install I did, and I think there is some old lib installed causing that problem; the bad thing is that there is no such make uninstall in the source code of php5... php-5.2.13 > make uninstall make: *** No rule to make target `uninstall'. Stop. I have tried reinstalling and purging all PHP-related packages via aptitude with no success. OS: Debian Squeeze. uname -a Linux desktop 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC 2010 x86_64 GNU/Linux Any idea how to fix that?

    Read the article

  • Ubuntu 10.04: Unable to Start RabbitMQ Server Post-Installation

    - by Garland W. Binns
    After installing RabbitMQ on Ubuntu 10.04 I receive a failure message that the service was unable to start. Any insight into the issue would be greatly appreciated! Below are contents of startup_log and startup_err. Startup_log: {error_logger,{{2012,7,7},{15,50,31}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,etimedout}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]} {error_logger,{{2012,7,7},{15,50,31}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[#Port<0.100>,<0.17.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,512}],[]]} {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[[rabbitmqprelaunch877,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]} {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]} {error_logger,{{2012,7,7},{15,50,31}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]} {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"} Startup_err: Crash dump was written to: erl_crash.dump Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

    Read the article

  • libstdc++ - compiling failing because of tr1/regex

    - by Radek Šimko
    I have these packages installed on my OpenSUSE 11.3: i | libstdc++45 | Standard shared library for C++ | package i | libstdc++45-devel | Contains files and libraries for development | package But when I try to compile this C++ code: #include <stdio.h> #include <tr1/regex> using namespace std; int main() { int test[2]; const tr1::regex pattern(".*"); test[0] = 1; if (tr1::regex_match("anything", pattern) == false) { printf("Pattern does not match.\n"); } return 0; } using g++ -pedantic -g -O1 -o ./main.o ./main.cpp It outputs these errors: ./main.cpp: In function ‘int main()’: ./main.cpp:13:43: error: ‘printf’ was not declared in this scope radek@mypc:~> nano main.cpp radek@mypc:~> g++ -pedantic -g -O1 -o ./main.o ./main.cpp /tmp/cc0g3GUE.o: In function `basic_regex': /usr/include/c++/4.5/tr1_impl/regex:771: undefined reference to `std::tr1::basic_regex<char, std::tr1::regex_traits<char> >::_M_compile()' /tmp/cc0g3GUE.o: In function `bool std::tr1::regex_match<char const*, char, std::tr1::regex_traits<char> >(char const*, char const*, std::tr1::basic_regex<char, std::tr1::regex_traits<char> > const&, std::bitset<11u>)': /usr/include/c++/4.5/tr1_impl/regex:2144: undefined reference to `bool std::tr1::regex_match<char const*, std::allocator<std::tr1::sub_match<char const*> >, char, std::tr1::regex_traits<char> >(char const*, char const*, std::tr1::match_results<char const*, std::allocator<std::tr1::sub_match<char const*> > >&, std::tr1::basic_regex<char, std::tr1::regex_traits<char> > const&, std::bitset<11u>)' collect2: ld returned 1 exit status What packages should I (un)install to make the code work on my PC?

    Read the article

  • How to serve Rails application with Passenger/Apache without domain name?

    - by grifaton
    I am trying to serve a Rails application using Passenger and Apache on a Ubuntu server. The Passenger installation instructions say I should add the following to my Apache configuration file - I assume this is /etc/apache2/httpd.conf. <VirtualHost *:80> ServerName www.yourhost.com DocumentRoot /somewhere/public # <-- be sure to point to 'public'! <Directory /somewhere/public> AllowOverride all # <-- relax Apache security settings Options -MultiViews # <-- MultiViews must be turned off </Directory> </VirtualHost> However, I do not yet have a domain pointing at my server, so I'm not sure what I should put for the ServerName parameter. I have tried the IP address, but when I do that, restarting Apache gives apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [Sun Jan 17 12:49:26 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [Sun Jan 17 12:49:36 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results and pointing the browser at the IP address gives a 500 Internal Server Error. The closest I have got to something sensible is with <VirtualHost efate:80> ServerName efate DocumentRoot /root/jpf/public <Directory /root/jpf/public> AllowOverride all Options -MultiViews </Directory> </VirtualHost> where "efate" is my server's host name. But now pointing my browser at the server's IP address just gives a page saying "It works!" - presumably this is a default page, but I'm not sure where this is being served from. I might be wrong in thinking that the reason I have been unable to get this to work is related to not having a domain name. This is the first time I have used Apache directly - any help would be most gratefully received!

    Read the article

  • configuration of Zend Framework in Ubuntu

    - by Rahul Mehta
    Hi, I have created a project zfapi with the zf command in Ubuntu. Now http://myserverpath.com/zfapi/ gives me a listing of the folders public, application and others. http://myserverpath.com/zfapi/public gives me the index page index.php, and I have created UserController.php in application/controllers, but http://myserverpath.com/zfapi/user/ says the user controller is not found. What configuration do I need to set to get it running properly? I have edited my /etc/apache2/apache2.conf and added the following at the end: <VirtualHost *:80> DocumentRoot "/var/www/mypath/zfapi" <Directory "/var/www/mypath/zfapi"> Order allow,deny Allow from all AllowOverride all </Directory> </VirtualHost> Restarting the server gives me this error: [Sat Jan 08 13:32:53 2011] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [Sat Jan 08 13:33:03 2011] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results What should I do?
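    For a stock Zend Framework project the usual arrangement is to make the public/ folder the DocumentRoot (rather than the project root), let its .htaccess rewrite every non-file request to index.php, and let that front controller dispatch /user to UserController; with the VirtualHost above pointing at the project root, /zfapi/user/ never reaches the dispatcher at all. As a rough sketch of the single entry point the zf tool generates in public/index.php (paths are illustrative, compare them against your own project):

        <?php
        // public/index.php - front controller (sketch of the zf-generated file).
        define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
        define('APPLICATION_ENV', getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'production');

        // Make the project's library/ directory available on the include path.
        set_include_path(implode(PATH_SEPARATOR, array(
            realpath(APPLICATION_PATH . '/../library'),
            get_include_path(),
        )));

        require_once 'Zend/Application.php';

        $application = new Zend_Application(
            APPLICATION_ENV,
            APPLICATION_PATH . '/configs/application.ini'
        );
        $application->bootstrap()->run();

    With DocumentRoot set to /var/www/mypath/zfapi/public and AllowOverride All in place for that directory, the controller should then be reachable at http://myserverpath.com/user.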

    Read the article

  • Java constructor using generic types

    - by user37903
    I'm having a hard time wrapping my head around Java generic types. Here's a simple piece of code that in my mind should work, but I'm obviously doing something wrong. Eclipse reports this error in BreweryList.java: The method initBreweryFromObject() is undefined for the type <T> The idea is to fill a Vector with instances of objects that are a subclass of the Brewery class, so the invocation would be something like: BreweryList breweryList = new BreweryList(BrewerySubClass.class, list); BreweryList.java package com.beerme.test; import java.util.Vector; public class BreweryList<T extends Brewery> extends Vector<T> { public BreweryList(Class<T> c, Object[] j) { super(); for (int i = 0; i < j.length; i++) { T item = c.newInstance(); // initBreweryFromObject() is an instance method // of Brewery, of which <T> is a subclass (right?) c.initBreweryFromObject(); // "The method initBreweryFromObject() is undefined // for the type <T>" } } } Brewery.java package com.beerme.test; public class Brewery { public Brewery() { super(); } protected void breweryMethod() { } } BrewerySubClass.java package com.beerme.test; public class BrewerySubClass extends Brewery { public BrewerySubClass() { super(); } public void androidMethod() { } } I'm sure this is a complete-generics-noob question, but I'm stuck. Thanks for any tips!

    Read the article

  • JavaScript settings are on but I still have issues with web sites using JavaScript.

    - by Mike
    I have two computers at home that are the same, except that I have installed Adobe Acrobat 9 Pro Extended on my main computer along with Outlook 2010 (Beta). I have issues when I log into a website that uses pop-up calendars to select the date; I pasted the error details below. I checked my other computer and it is fine. I've checked the JavaScript settings and they are correct. I am at a loss. Any suggestions? Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; HPDTDF; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; eMusic DLM/4; .NET4.0C) Timestamp: Mon, 26 Jul 2010 22:03:59 UTC Message: 'CalendarPopup' is undefined Line: 390 Char: 2 Code: 0 Message: 'CalendarPopup' is undefined Line: 410 Char: 2 Code: 0

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
    Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object.
    % cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object) and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are BITS (the section is not of type SHT_NOBITS) and NOBITS (the section is of type SHT_NOBITS). SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • jqGrid (Delete row) - How to send additional POST data?

    - by ronanray
    Hi experts, I'm having a problem with the jqGrid delete mechanism, as it only sends the "oper" and "id" parameters as POST data (id is the primary key of the table). The problem is, I need to delete a row based on the id and another column value, let's say user_id. How do I add this user_id to the POST data? I can summarize the issue as the following: how do I get the cell value (user_id) of the selected row, and how do I add that user_id to the POST data so it can be retrieved from the code behind, where the actual delete process takes place? Sample code: jQuery("#tags").jqGrid({ url: "subgrid.process.php", editurl: "subgrid.process.php?", datatype: "json", mtype: "POST", colNames:['id','user_id','status_type_id'], colModel:[{name:'id', index:'id', width:100, editable:true}, {name:'user_id', index:'user_id', width:200, editable:true}, {name:'status_type_id', index:'status_type_id', width:200} ], pager: '#pagernav2', rowNum:10, rowList:[10,20,30,40,50,100], sortname: 'id', sortorder: "asc", caption: "Test", height: 200 }); jQuery("#tags").jqGrid('navGrid','#pagernav2', {add:true,edit:false,del:true,search:false}, {}, {mtype:"POST",closeAfterAdd:true,reloadAfterSubmit:true}, // add options {mtype:"POST",reloadAfterSubmit:true}, // del options {} // search options ); Help....
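    On the server side the extra value is just another POST field once the grid has been told to send it (jqGrid's delete options can carry extra parameters, for example through a delData setting or an onclickSubmit handler, depending on the version, and the selected row's cell can be read with getCell). The PHP below is an illustrative sketch of the delete branch of subgrid.process.php, not the actual file, and the connection and table names are hypothetical:

        <?php
        // subgrid.process.php (delete branch only) - sketch.
        $oper = isset($_POST['oper']) ? $_POST['oper'] : null;

        if ($oper === 'del') {
            $id     = isset($_POST['id'])      ? (int) $_POST['id']      : 0;
            $userId = isset($_POST['user_id']) ? (int) $_POST['user_id'] : 0; // the extra field

            $pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass');          // hypothetical DSN
            $stmt = $pdo->prepare('DELETE FROM my_table WHERE id = :id AND user_id = :uid');  // hypothetical table
            $stmt->execute(array(':id' => $id, ':uid' => $userId));
        }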

    Read the article

  • apache htaccess rewrite rules make redirection loop

    - by Ali
    Hi guys, I have a very strange problem with Apache .htaccess URL rewriting and redirection. Here's my setup: I have a Zend application with a single point of entry (index.php) directly under my Apache document root (call this the "public" folder). I also have all other public files (images, js, css, etc.) under the public folder. Here, I also have a WordPress blog under the "blog" folder. There's an empty test folder too. The Problem: When I go to mydomain.com/blog, I get redirected to http://www.theredpin.com/blog (correctly), then to http://www.theredpin.com/blog/ (just with an extra / at the end), and finally to http://theredpin.com/blog/ -- and we're back where we started. The loop continues. What I don't understand is why WordPress would try to remove the www. I'm guessing it's WordPress because my empty test folder acts just fine! PLEASE HELP. I'M REALLY DESPERATE :( Thank you very much. Other things that happen: when I go to mydomain.com, I correctly get redirected to www.mydomain.com; when I go to www.mydomain.com, I correctly stay where I am; when I go to www.mydomain.com/test or mydomain.com/test, the behaviour is correct. Setup: my .htaccess file does the following. If there's no www., then add it and do a 301 redirect. Here's the code I use: RewriteCond %{HTTP_HOST} ^mydomain.com [NC] RewriteRule ^(.*)$ http://www.mydomain.com/$1 [L,R=301] If the request is NOT for a resource (image, etc.), or the blog, then load the Zend application by rewriting to index.php: RewriteRule !((^blog(/)?.*$)|(.(js|ico|gif|jpg|jpeg|png|css|cur|JPG|html|txt))$) index.php Thanks again for all your help! Ali
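    One guess, based only on the symptoms described: WordPress issues its own canonical redirect to whatever is stored in its siteurl/home settings, so if those were saved without the www it will keep stripping the www while the .htaccess rule keeps adding it back, which produces exactly this kind of loop. If that turns out to be the cause, pinning both URLs to the www form (shown here in wp-config.php with placeholder values) stops the tug-of-war:

        <?php
        // wp-config.php (excerpt) - placeholder domain, adjust to the real blog URL.
        // Both constants override the siteurl/home options stored in the database.
        define('WP_HOME',    'http://www.mydomain.com/blog');
        define('WP_SITEURL', 'http://www.mydomain.com/blog');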

    Read the article

  • iPhone how to enable or disable UITabBar

    - by Dean Wagstaff
    I have a simple app with a tab bar which based upon user input disables one or more of the bar items. I understand I need to use a UITabBarDelegate which I have tried to use. However when I call the delegate method I get an uncaught exception error [NSObject doesNotRecognizeSelector]. I am not sure I am doing this all right or that I haven't missed something. Any suggestions. What I have now is the following: WMViewController.h import define kHundreds 0 @interface WMViewController : UIViewController { } @end WMViewController.m import "WMViewController.h" import "MLDTabBarControllerAppDelegate.h" @implementation WMViewController (IBAction)finishWizard{ MLDTabBarControllerAppDelegate *appDelegate = (MLDTabBarControllerAppDelegate *)[[UIApplication sharedApplication] delegate]; [appDelegate setAvailabilityTabIndex:0 Enable:TRUE]; } MLDTabBarControllerAppDelegate.h import @interface MLDTabBarControllerAppDelegate : NSObject { } (void) setAvailabilityTabIndex: (NSInteger) index Enable: (BOOL) enable; @end MLDTabBarControllerAppDelegate.m import "MLDTabBarControllerApplicationDelegate.h" import "MyListDietAppDelegate.h" @implementation MLDTabBarControllerAppDelegate (void) setAvailabilityTabIndex: (NSInteger) index Enable: (BOOL) enable { UITabBarController *controller = (UITabBarController *)[[[MyOrganizerAppDelegate getTabBarController] viewControllers ] objectAtIndex:index]; [[controller tabBarItem] setEnabled:enable]; } @end I get what appear to be a good controller object but crash on the [[controller tabBarItem]setEnabled:enable]; What am I missing... Any suggestions Thanks,

    Read the article

  • ASP.NET MVC 3: RouteExistingFiles = true seems to have no effect

    - by Oleksandr Kaplun
    I'm trying to understand how RouteExistingFiles works. So I've created a new MVC 3 internet project (MVC 4 behaves the same way) and put a HTMLPage.html file to the Content folder of my project. Now I went to the Global.Asax file and edited the RegisterRoutes function so it looks like this: public static void RegisterRoutes(RouteCollection routes) { routes.RouteExistingFiles = true; //Look for routes before looking if a static file exists routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new {controller = "Home", action = "Index", id = UrlParameter.Optional} // Parameter defaults ); } Now it should give me an error when I'm requesting a localhost:XXXX/Content/HTMLPage.html since there's no "Content" controller and the request definitely hits the default pattern. But instead I'm seeing my HTMLPage. What am I doing wrong here? Update: I think I'll have to give up. Even if I'm adding a route like this one: routes.MapRoute("", "Content/{*anything}", new {controller = "Home", action = "Index"}); it still shows me the content of the HTMLPage. When I request a url like ~/Content/HTMLPage I'm getting the Index page as expected, but when I add a file extenstion like .html or .txt the content is shown (or a 404 error if the file does not exist). If anyone can check this in VS2012 please let me know what result you're getting. Thank you.

    Read the article

  • Jqgrid + CodeIgniter

    - by Ivan
    I tried to make jqGrid work with CodeIgniter, but I could not do it. I only want to show the data from the table in JSON format, but nothing happens and I don't know what I am doing wrong; I can't see the table with the content I am calling. My controller: class Grid extends Controller { public function f() { $this->load->model('dbgrid'); $var['grid'] = $this->dbgrid->getcontentfromtable(); foreach($var['grid'] as $row) { $responce->rows[$i]['id']=$row->id; $responce->rows[$i]['cell']=array($row->id,$row->id_catalogo); } $json = json_encode($responce); $this->load->view('vgrid',$json); } function load_view_grid() { $this->load->view('vgrid'); } } My model: class Dbgrid extends Model{ function getcontentfromtable() { $sql = 'SELECT * FROM anuncios'; $query = $this->db->query($sql); $result = $query->result(); return $result; } } My view (script): $(document).ready(function() { jQuery("#list27").jqGrid({ url:'http://localhost/sitio/index.php/grid/f', datatype: "json", mtype: "post", height: 255, width: 600, colNames:['ID','ID_CATALOGO'], colModel:[ {name:'id',index:'id', width:65, sorttype:'int'}, {name:'id_catalogo',index:'id_catalogo', sorttype:'int'} ], rowNum:50, rowTotal: 2000, rowList : [20,30,50], loadonce:true, rownumbers: true, rownumWidth: 40, gridview: true, pager: '#pager27', sortname: 'item_id', viewrecords: true, sortorder: "asc", caption: "Loading data from server at once" }); }); Hope someone can help me.
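    For what it is worth, two things stand out in the controller: $responce and $i are never initialised (which by itself produces undefined variable/index notices and collapses all rows onto one key), and the encoded JSON is handed to a view instead of being sent back, while jqGrid's default jsonReader also expects page/total/records alongside rows. A rough sketch of the action with those points addressed (field names kept from the question, everything else illustrative):

        <?php
        class Grid extends Controller {

            public function f()
            {
                $this->load->model('dbgrid');
                $rows = $this->dbgrid->getcontentfromtable();

                $response          = new stdClass();
                $response->page    = 1;
                $response->total   = 1;
                $response->records = count($rows);
                $response->rows    = array();

                $i = 0;
                foreach ($rows as $row) {
                    $response->rows[$i]['id']   = $row->id;
                    $response->rows[$i]['cell'] = array($row->id, $row->id_catalogo);
                    $i++;
                }

                // Send the JSON straight back instead of wrapping it in a view.
                header('Content-Type: application/json');
                echo json_encode($response);
            }
        }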

    Read the article

  • UISearchBar in a UITableView

    - by petert
    I'm trying to mimic the behaviour of a table view like the one in the iPod app for Artists - it's a sectioned table view with a section index on the right and with a search bar at the top, but initially hidden when the view is shown. I am using SDK 3.1.2 and IB, so I simply dragged a UISearchDisplayController into my NIB - it does wire everything up ready for searching. The problem starts because I'm adding the UISearchBar into the first section of the UITableView, because if I understand correctly I must do this so I can jump to the search bar by touching the search icon in the section index directly? When the table view appears I see the search bar but it has resized and I now have a white block behind the section index at the top. It doesn't take the color of the UISearchBar's surround, which interestingly is different to that shown in Interface Builder. Searching around, I did find a tip to add a small navigation bar and a UISearchBar in a UIView, then add this to the table view cell - this works.. BUT the color of the navigation bar's background is what you'd expect normally (gray), not the different color as noted above?! More interestingly, if I click the search bar to start a search, then click Cancel, all is fixed!!! The background along the whole tableview cell where the search bar is, is the same!!?! Thanks for any tips.

    Read the article

  • Error editing a blob field in ArcGIS Engine

    - by Victor Grigoriu
    I have a GIS layer backed by a MSSQL db. The features on the layer have, say, one field of type esriFieldTypeString and one of type esriFieldTypeBlob. I can edit the string field just fine, but, when I try to edit a blob, StopEditOperation() throws a very generic exception (message: "Error HRESULT E_FAIL has been returned from a call to a COM component.", error code: -2147467259). I couldn't find anything related in the server log. Does anyone have any idea what's going on? IServerContext serverContext = GetServerContext(agsConn, serviceName); ILayer layer = GetILayer(layerName, serverContext); IWorkspace workspace = GetIWorkspace(layer); var feature = GetIFeature(objectId, workspace, layer); var workspaceEdit = (IWorkspaceEdit)workspace; workspaceEdit.StartEditing(false); workspaceEdit.StartEditOperation(); var index = feature.Fields.FindField(featureDetailName); IField field = feature.Fields.get_Field(index); byte[] byteArray = {1, 2, 3}; MemoryBlobStream blob = new MemoryBlobStream(); ((IMemoryBlobStreamVariant)blob).ImportFromVariant(byteArray); if (field.CheckValue(blob)) { feature.set_Value(index, blob); } feature.Store(); workspaceEdit.StopEditOperation(); workspaceEdit.StopEditing(true); serverContext.RemoveAll(); serverContext.ReleaseContext();

    Read the article

  • Kohana 3: How to find the active item in a dynamic menu

    - by Svish
    Maybe not the best explanation, but hear me out. Say I have the following in a config file called menu.php: // Default controller is 'home' and default action is 'index' return array( 'items' => array( 'Home' => '', 'News' => 'news', 'Resources' => 'resources', ), ); I now want to print this out as a menu, which is pretty simple: foreach(Kohana::config('menu.items') as $title => $uri) { echo '<li>' . HTML::anchor($uri, $title) . '</li>'; } However, I want to find the $uri that matches the current controller and action. And if the action is the default one or not. What I want to end up with is that menu item should have id="active-item" if it is the linking to the current controller, but the default action. And id="active-subitem if it is linking to the current controller and the action is not the default one. Hope that made sense... Anyone able to help me out here? Both in how to do this in Kohana 3 and also how it should be done in Kohana 3. I'm sure there are lots of ways, but yeah... any help is welcome :) Examples: domain.com -- Home should be active-item since it is the default controller domain.com/home -- Home should be active-item domain.com/home/index -- Home should be active-item since index is the default action domain.com/resources -- Resources should be active-item domain.com/resources/get/7 -- Resources should be active-subitem since get is not the default action
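    One possible approach (a sketch, not canonical Kohana practice; note that the request accessors changed between releases, with public $controller/$action properties in 3.0 versus controller()/action() methods in 3.1+) is to compare each menu URI against the current controller and pick the id based on whether the action is the default 'index':

        <?php
        // View/helper sketch, written against Kohana 3.0-style request properties.
        $request    = Request::instance();            // the current request
        $controller = strtolower($request->controller);
        $action     = strtolower($request->action);
        $is_default = ($action === 'index');          // 'index' is the default action

        foreach (Kohana::config('menu.items') as $title => $uri) {
            // An empty URI stands for the default controller, 'home'.
            $item_controller = ($uri === '') ? 'home' : strtolower($uri);

            $id = '';
            if ($item_controller === $controller) {
                $id = $is_default ? ' id="active-item"' : ' id="active-subitem"';
            }

            echo '<li' . $id . '>' . HTML::anchor($uri, $title) . '</li>';
        }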

    Read the article

  • Where does lucene .net cache the search results?

    - by Lanceomagnifico
    Hi, I'm trying to figure out where Lucene stores the cached query results, and how it's configured to do so - and how long it caches for. This is for an ASP.NET 3.5 solution. I'm getting this problem: If I run a search and sort the result by a particular product field, it seems to work the very first time each search and sort combination is used. If I then go in and change some product attributes, reindex and run the same search and sort, I get the products returned in the same order as the very first result. example Product A is named: foo Product B is named: bar For the first search, sort by name desc. This results in: Product A Product B Now mix up the data a bit: Change names to: Product A named: bar Product B named: foo reindex verify that the index contains the changes for these two products. search Result: Product A Product B Since I changed the alphabetical order of the names, I expected: Product B Product A So I think that Lucene is caching the search results. (Which, btw, is a very good thing.) I just need to know where/how to clear these results. I've tried deleting the index files and doing an IISreset to clear the memory, but it seems to have no effect. So I'm thinking there is another set of Lucene files outside of the indexes that Lucene uses for caching. EDIT I just found out that you must create the index for field you wish to sort on as un-tokenized. I had the field as tokenized, so sorting didn't work.

    Read the article

  • Trouble using ‘this.files’ in GruntJS Tasks

    - by whomba
    Background: So I’m new to writing grunt tasks, so hopefully this isn’t a stupid question. Purpose of my grunt task: Given a file (html / jsp / php / etc) search all the link / script tags, compare the href / src to a key, and replace with the contents of the file. Proposed task configuration: myTask:{ index:{ files:{src:"test/testcode/index.html", dest:"test/testcode/index.html"}, css:[ { file: "/css/some/css/file.css", fileToReplace:"/target/css/file.css" }, { file: "/css/some/css/file2.css", fileToReplace:"/target/css/file2.css" } ], js:[ { file: "/css/some/css/file.css", fileToReplace:”/target/css/file.css" }, { file: "/css/some/css/file2.css", fileToReplace:"/target/css/file2.css" } ] } } Problem: Now, per my understanding, when dealing with files, you should reference the ‘this.files’ object because it handles a bunch of nice things for you, correct? Using intelliJ and the debugger, I see the following when i watch the ‘this.files’ object: http://i.imgur.com/Gj6iANo.png What I would expect to see is src and dest to be the same, not dest ===‘src’ and src === undefined. So, What is it that I am missing? Is there some documentation on ‘this.files’ I can read about? I’ve tried setting the files attribute in the target to a number of other formats per grunts spec, none seem to work (for this script, ideally the src / dest would be the same, so the user would just have to enter it once, but i’m not even getting in to that right now.) Thanks for the help

    Read the article

  • django error about django-sphinx

    - by zjm1126
    from django.db import models from djangosphinx.models import SphinxSearch class MyModel(models.Model): search = SphinxSearch() # optional: defaults to db_table # If your index name does not match MyModel._meta.db_table # Note: You can only generate automatic configurations from the ./manage.py script # if your index name matches. search = SphinxSearch('index_name') # Or maybe we want to be more.. specific searchdelta = SphinxSearch( index='index_name delta_name', weights={ 'name': 100, 'description': 10, 'tags': 80, }, mode='SPH_MATCH_ALL', rankmode='SPH_RANK_NONE', ) queryset = MyModel.search.query('query') results1 = queryset.order_by('@weight', '@id', 'my_attribute') results2 = queryset.filter(my_attribute=5) results3 = queryset.filter(my_other_attribute=[5, 3,4]) results4 = queryset.exclude(my_attribute=5)[0:10] results5 = queryset.count() # as of 2.0 you can now access an attribute to get the weight and similar arguments for result in results1: print result, result._sphinx # you can also access a similar set of meta data on the queryset itself (once it's been sliced or executed in any way) print results1._sphinx and Traceback (most recent call last): File "D:\zjm_code\sphinx_test\models.py", line 1, in <module> from django.db import models File "D:\Python25\Lib\site-packages\django\db\__init__.py", line 10, in <module> if not settings.DATABASE_ENGINE: File "D:\Python25\Lib\site-packages\django\utils\functional.py", line 269, in __getattr__ self._setup() File "D:\Python25\Lib\site-packages\django\conf\__init__.py", line 38, in _setup raise ImportError("Settings cannot be imported, because environment variable %s is undefined." % ENVIRONMENT_VARIABLE) ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.

    Read the article

  • Sort multiple columns while using a bindingsource or bindinglist

    - by JF
    Hi everyone. I have a problem I am trying to fix: sorting a DataGridView on multiple columns. I have read that this is not a built-in feature of the DataGridView and that I have to implement it myself. I have found multiple solutions, but none quite did the job. I'm also quite a newbie in C# and I don't know much of the .NET library. I have also searched the MSDN site for info on different classes that might be of use, but with no success. Now, let's get to the point. I have a DataGridView with a BindingList (originally, a BindingSource) that I want to sort by multiple keys. My DataGridView has 9 columns and the user should be able to sort on any column. For example, let's say my grid has 3 columns, named Index, ID and Name. The user wants to sort by Name; implicitly, the next sort key would be Index and then ID. So, in case 2 names are identical, Index should be the next sort option. Any ideas how this can be done?

    Read the article

< Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >