Search Results

Search found 20566 results on 823 pages for 'folder structure'.


  • How to add favicon.ico in ASP.NET site

    - by Tapas Bose
    The solution structure of my application is as shown in the attached screenshot (not reproduced here). I am in Login.aspx and want to add favicon.ico, which is placed in the root of the site, to that page. What I am doing is:

        <link id="Link1" runat="server" rel="shortcut icon" href="../favicon.ico" type="image/x-icon" />
        <link id="Link2" runat="server" rel="icon" href="../favicon.ico" type="image/ico" />

    I have also tried:

        <link id="Link1" runat="server" rel="shortcut icon" href="favicon.ico" type="image/x-icon" />
        <link id="Link2" runat="server" rel="icon" href="favicon.ico" type="image/ico" />

    Neither works, and I have cleared the browser cache with no luck. What should the path to favicon.ico be from Login.aspx and from Site.master? Thank you. The login page's URL is http://localhost:2873/Pages/Login.aspx and the favicon.ico's URL is http://localhost:2873/favicon.ico. I am still unable to see the favicon after changing my code to:

        <link id="Link1" rel="shortcut icon" href="/favicon.ico" type="image/x-icon" />
        <link id="Link2" rel="icon" href="/favicon.ico" type="image/ico" />

    Read the article

  • Suggest a good method with least lookup time complexity

    - by Amrish
    I have a structure with three identifier fields and one value field, and a list of these objects. To give an analogy, the identifier fields are like the primary key of the object: the three fields together uniquely identify an object.

        Class {
            int a1;
            int a2;
            int a3;
            int value;
        };

    I will have a list of, say, 1000 objects of this datatype. I need to check for specific values of these identity keys by passing values of a1, a2 and a3 to a lookup function, which checks whether any object with those specific values of a1, a2 and a3 is present and returns its value. What is the most effective way to implement this to achieve the best lookup time?

    One solution I could think of is to have a 3-dimensional matrix of length, say, 1000 and populate the value in it. This has a lookup time of O(1), but the disadvantages are:

    1. I need to know the length of the array in advance.
    2. For more identity fields (say 20), I would need a 20-dimensional matrix, which would be overkill on memory.

    For my actual implementation, I have 23 identity fields. Can you suggest a good way to store this data that would give me the best lookup time?
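
    A hedged sketch of the usual alternative: hash on the composite key instead of indexing a dense matrix. It is shown in Python for brevity; in C++ the same idea is a std::unordered_map (or std::map) keyed on a struct of the identifier fields with a suitable hash/comparison function, which keeps average lookup near O(1) without allocating storage for every possible key combination.

        # Composite-key hash table: the key is the tuple of identifier fields,
        # so the idea scales to 23 fields without a 23-dimensional array.
        table = {}

        def insert(a1, a2, a3, value):
            table[(a1, a2, a3)] = value

        def lookup(a1, a2, a3):
            # Returns None when no object has these identifiers.
            return table.get((a1, a2, a3))

        insert(1, 7, 42, 99)
        print(lookup(1, 7, 42))   # -> 99
        print(lookup(0, 0, 0))    # -> None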

    Read the article

  • How to decode a JSON String

    - by ilnur777
    Hi, everybody! Could I ask you to help me decode this JSON code:

        $json = '{"inbox":[{"from":"55512351","date":"29\/03\/2010","time":"21:24:10","utcOffsetSeconds":3600,"recipients":[{"address":"55512351","name":"55512351","deliveryStatus":"notRequested"}],"body":"This is message text."},{"from":"55512351","date":"29\/03\/2010","time":"21:24:12","utcOffsetSeconds":3600,"recipients":[{"address":"55512351","name":"55512351","deliveryStatus":"notRequested"}],"body":"This is message text."},{"from":"55512351","date":"29\/03\/2010","time":"21:24:13","utcOffsetSeconds":3600,"recipients":[{"address":"55512351","name":"55512351","deliveryStatus":"notRequested"}],"body":"This is message text."},{"from":"55512351","date":"29\/03\/2010","time":"21:24:13","utcOffsetSeconds":3600,"recipients":[{"address":"55512351","name":"55512351","deliveryStatus":"notRequested"}],"body":"This is message text."}]}';

    I would like to organize the above structure into this:

        Note 1:
          Folder: inbox
          From (from): ...
          Date (date): ...
          Time (time): ...
          utcOffsetSeconds: ...
          Recipient (address): ...
          Recipient (name): ...
          Status (deliveryStatus): ...
          Text (body): ...
        Note 2:
          ...

    Thank you in advance!
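
    Once decoded, the structure is just an object holding an "inbox" array of message objects, each with a nested "recipients" array; in PHP, json_decode($json, true) returns the same nesting as associative arrays. A minimal sketch of the traversal, shown in Python for illustration (json_string stands for the JSON text above):

        import json

        data = json.loads(json_string)   # json_string holds the JSON from the question
        for i, msg in enumerate(data["inbox"], start=1):
            print("Note %d:" % i)
            print("  Folder: inbox")
            print("  From:", msg["from"])
            print("  Date:", msg["date"])
            print("  Time:", msg["time"])
            print("  utcOffsetSeconds:", msg["utcOffsetSeconds"])
            for r in msg["recipients"]:
                print("  Recipient (address):", r["address"])
                print("  Recipient (name):", r["name"])
                print("  Status:", r["deliveryStatus"])
            print("  Text:", msg["body"])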

    Read the article

  • Relative Footer with Absolute DIV Elements

    - by Alex
    Hi, I'm creating a WordPress theme where the header and the nav bar are positioned absolutely, and the footer needs to be positioned relatively depending on the height of the content on each page. When I try to set the footer's positioning to relative, however, it appears at the top of the page underneath the content. All elements are in a relatively positioned container. Is there any way to fix this, or to dynamically get the height of the content plus the header and nav bar? The structure of the page is as follows:

        <div id="container">
          <div id="header"> </div>
          <div id="navbar"> </div>
          <div id="content">
            Dynamically generated and variable height content here.
          </div>
          <div id="footer"> </div>
        </div>

    And the relevant CSS is:

        #container {
          position: relative;
          margin: 0px auto;
          width: 945px;
          text-align: left;
        }
        #header, #navbar {
          background-color: #FFFFFF;
          position: absolute;
          margin-right: auto;
          margin-left: auto;
          width: 945px;
          float: left;
        }
        #footer {
          height: 35px;
          margin-right: auto;
          margin-left: auto;
          width: 945px;
          position: relative;
          padding-top: 20px
        }

    Thanks for the help.

    Read the article

  • Accommodating hierarchical data in SQL Server 2005 database design

    - by Remnant
    Context: I am fairly new to database design (= know the basics) and am grappling with how best to design my database for a project I am currently working on. In short, my database will keep a log of which employees have attended certain health and safety courses throughout the year. There are multiple types of course, e.g. moving objects, fire safety, hygiene etc. In terms of my database design I need to accommodate the following:

    - Each location can have multiple divisions
    - Each division can have multiple departments
    - Each department can have multiple functions
    - Each function can have multiple job roles
    - Each job role can have different course requirements

    Also note that the structure at each location may not be the same, e.g. the departments within divisions are not the same across locations, and the functions within departments may also differ.

    Edit - updated to better articulate the problem: Let's assume I am just looking at Location, Division and Department, and I have my database as follows:

        LocationTable        DivisionTable        DepartmentTable
        LocationID (PK)      DivisionID (PK)      DepartmentID (PK)
        LocationName         DivisionName         DepartmentName

    There is a many-to-many relationship between Locations and Divisions and also between Departments and Divisions. Suppose I set up a 'Junction Table' as follows:

        Location_Division
        LocationID (FK)
        DivisionID (FK)

    Using Location_Division I could easily pull back the Divisions for any Location. However, suppose I want to pull back all departments for a given Division in a given Location. If I set up another 'Junction Table' for Division and Department, then I can't see how I would differentiate Division by Location:

        Division_Department
        DivisionID (FK)
        DepartmentID (FK)

        Location_Division        Division_Department
        LocationID  DivisionID   DivisionID  DepartmentID
        1           1            1           1
        1           2            1           2
        2           1            2           1
        2           2            2           2

    Do I need to expand the number of columns in my 'Junction Table', e.g.:

        Location_Division_Department
        LocationID (FK)
        DivisionID (FK)
        DepartmentID (FK)

        Location_Division_Department
        LocationID  DivisionID  DepartmentID
        1           1           1
        1           1           2
        1           1           3
        2           1           1
        2           1           2
        2           1           3
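
    A small sketch of how the three-column junction proposed at the end of the question would be queried (SQLite driven from Python, data taken from the example above); with all three keys in one row, departments can be pulled for a division within a particular location directly:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE Location_Division_Department (
                LocationID   INTEGER,
                DivisionID   INTEGER,
                DepartmentID INTEGER
            );
            INSERT INTO Location_Division_Department VALUES
                (1,1,1),(1,1,2),(1,1,3),(2,1,1),(2,1,2),(2,1,3);
        """)
        rows = con.execute(
            "SELECT DepartmentID FROM Location_Division_Department "
            "WHERE LocationID = ? AND DivisionID = ?", (1, 1)).fetchall()
        print([r[0] for r in rows])   # -> [1, 2, 3]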

    Read the article

  • Shell script to decrypt and move files from one directory to another?

    - by KittyYoung
    So, I have a directory, and in it are several files. I'm trying to decrypt those files and then move them to another directory, and I can't seem to figure out how to set the output filename and move the result. The directory structure looks like this:

        /Applications/MAMP/bin/decryptandmove.sh
        /Applications/MAMP/bin/passtext.txt
        /Applications/MAMP/bin/encrypted/test1.txt.pgp
        /Applications/MAMP/bin/encrypted/test2.txt.pgp
        /Applications/MAMP/htdocs/www/decrypted/

    For all of the files found in the encrypted directory, I'm trying to decrypt them and then move them into www/decrypted/. I don't know what the filenames in the encrypted directory will be ahead of time (this script will eventually run via cron), so I want to output each decrypted file with the same filename, but without the .pgp. So, the result would be:

        /Applications/MAMP/bin/decryptandmove.sh
        /Applications/MAMP/bin/passtext.txt
        /Applications/MAMP/bin/encrypted/
        /Applications/MAMP/htdocs/decrypted/test1.txt.pgp
        /Applications/MAMP/htdocs/decrypted/test2.txt.pgp

    This is all I have so far, and it doesn't work... FILE and FILENAME are both wrong, and I haven't even gotten to the moving part of it yet. Help? I've written exactly one shell script ever, and it was so simple a monkey could have done it... feeling out of my depth here.

        pass_phrase=`cat passtext.txt|awk '{print $1}'`
        for FILE in '/Applications/MAMP/bin/encrypted/'; do
            FILENAME=$(basename $FILE .pgp)
            gpg --passphrase $pass_phrase --output $FILENAME --decrypt $FILE
        done
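
    One way to read the problem: the for loop needs a glob over *.pgp (as written it iterates once over the single literal directory string), and --output needs a full destination path, not just the bare filename. An illustrative sketch of the same loop, written in Python rather than shell; the gpg flags are the ones from the question, --batch is an added assumption (newer gpg2 builds may also want --pinentry-mode loopback), and the destination is the path from the first listing:

        import glob, os, subprocess

        src = "/Applications/MAMP/bin/encrypted"
        dst = "/Applications/MAMP/htdocs/www/decrypted"

        with open("/Applications/MAMP/bin/passtext.txt") as f:
            pass_phrase = f.read().split()[0]      # first field, like the awk '{print $1}'

        for path in glob.glob(os.path.join(src, "*.pgp")):
            out = os.path.join(dst, os.path.basename(path)[:-len(".pgp")])
            subprocess.check_call(["gpg", "--batch", "--passphrase", pass_phrase,
                                   "--output", out, "--decrypt", path])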

    Read the article

  • Updating multiple related tables in SQLite

    - by PerryJ
    Just some background, sorry so long-winded. I'm using the System.Data.SQLite ADO.NET adapter to create a local SQLite database, and this will be the only process hitting the database, so I don't need to worry about concurrency. I'm building the database from various sources and don't want to build this all in memory using DataSets or DataAdapters or anything like that; I want to do this using SQL (DbCommands). I'm not very good with SQL and a complete noob in SQLite. I'm basically using SQLite as a local database / save-file structure.

    The database has a lot of related tables, and the data has nothing to do with People or Regions or Districts, but to use a simple analogy, imagine:

    - Region table with auto-increment RegionID, a RegionName column and various optional columns
    - District table with auto-increment DistrictID, DistrictName, RegionID, and various optional columns
    - Person table with auto-increment PersonID, PersonName, DistrictID, and various optional columns

    So I get some data representing RegionName, DistrictName, PersonName, and other Person-related data. The Region, District and/or Person may or may not have been created at this point. Once again, not being the greatest with this, my thoughts would be something like:

    1. Check whether the Region exists; if so, get the RegionID, else create it and get the RegionID.
    2. Check whether the District exists; if so, get the DistrictID, else create it (adding in the RegionID from above) and get the DistrictID.
    3. Check whether the Person exists; if so, get the PersonID, else create it (adding in the DistrictID from above) and get the PersonID.
    4. Update the Person with the rest of the data.

    In MS SQL Server I would create a stored procedure to handle all this. The only way I can see to do this with SQLite is a lot of commands, so I'm sure I'm not getting this. I've spent hours looking around on various sites but just don't feel like I'm going down the right road. Any suggestions would be greatly appreciated.
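
    The step list above is the classic "get or create" pattern, and it can be expressed with plain parameterized commands rather than a stored procedure. A hedged sketch of the idea, written against Python's built-in sqlite3 module instead of System.Data.SQLite (the SQL is the interesting part; the C# version would issue the same SELECT/INSERT pairs and read the new id via SQLite's last_insert_rowid()). It assumes the Region/District/Person tables from the analogy already exist, and the helper name is made up:

        import sqlite3

        con = sqlite3.connect("save.db")
        cur = con.cursor()

        def get_or_create(table, id_col, name_col, name, fk=None):
            # Hypothetical helper: return the id of an existing row, or insert it
            # and return the new auto-increment id. fk is an optional (column, value)
            # pair such as ("RegionID", 3). Identifiers are interpolated directly
            # because they are fixed strings here, not user input.
            sql = "SELECT {0} FROM {1} WHERE {2} = ?".format(id_col, table, name_col)
            args = [name]
            if fk:
                sql += " AND {0} = ?".format(fk[0])
                args.append(fk[1])
            row = cur.execute(sql, args).fetchone()
            if row:
                return row[0]
            cols, vals = [name_col], [name]
            if fk:
                cols.append(fk[0])
                vals.append(fk[1])
            cur.execute("INSERT INTO {0} ({1}) VALUES ({2})".format(
                table, ", ".join(cols), ", ".join("?" * len(vals))), vals)
            return cur.lastrowid

        region_id   = get_or_create("Region",   "RegionID",   "RegionName",   "North")
        district_id = get_or_create("District", "DistrictID", "DistrictName", "Central",
                                    fk=("RegionID", region_id))
        person_id   = get_or_create("Person",   "PersonID",   "PersonName",   "Alice",
                                    fk=("DistrictID", district_id))
        con.commit()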

    Read the article

  • Partial mapping in Entity Framework 4

    - by Dimi Toulakis
    Hi guys, I want to be able to do the following: I have a model, and inside it I have an entity with the following structure:

        public class Client
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Description { get; set; }
        }

    What I want now is to get just the client name based on the id, so I wrote a stored procedure that does this:

        CREATE PROCEDURE [Client].[GetBasics]
            @Id INT
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            SELECT Name
            FROM Client.Client
            INNER JOIN Client.Validity ON ClientId = Client.Id
            WHERE Client.Id = @Id;
        END

    Now, back in VS, I update the model from the database with the stored procedure included. The next step is to map this stored procedure to the Client entity as a function import. This also works fine. Trying to load one client's name then results in an error at runtime:

        "The data reader is incompatible with the specified 'CSTestModel.Client'. A member of the type, 'Id', does not have a corresponding column in the data reader with the same name."

    I am OK with the message; I know how to fix this (returning Id, Name, Description as the result set). The idea behind this question is the following: I just want to load parts of the entity, not the complete entity itself. Is there a solution to my problem (except creating complex types)? And if yes, can someone point me in the right direction? Many thanks, Dimi

    Read the article

  • Malloc corrupting already malloc'd memory in C

    - by Kyte
    I'm currently helping a friend debug a program of his, which includes linked lists. His list structure is pretty simple:

        typedef struct nodo {
            int cantUnos;
            char* numBin;
            struct nodo* sig;
        } Nodo;

    We've got the following code snippet:

        void insNodo(Nodo** lista, char* auxBin, int auxCantUnos){
            printf("*******Insertando\n");
            int i;
                if (*lista) printf("DecInt*%p->%p\n", *lista, (*lista)->sig);
            Nodo* insert = (Nodo*)malloc(sizeof(Nodo*));
                if (*lista) printf("Malloc*%p->%p\n", *lista, (*lista)->sig);
            insert->cantUnos = auxCantUnos;
            insert->numBin = (char*)malloc(strlen(auxBin)*sizeof(char));
            for(i=0 ; i<strlen(auxBin) ; i++) insert->numBin[i] = auxBin[i];
            insert->numBin[i] = '\0';
            insert->sig = NULL;
            Nodo* aux;
            [etc]

    (The lines with extra indentation were my addition for debug purposes.) This yields me the following:

        *******Insertando
        DecInt*00341098->00000000
        Malloc*00341098->2832B6EE

    (*lista)->sig is previously and deliberately set to NULL, which checks out until here, and I fixed a potential buffer overflow (he'd forgotten to copy the NUL terminator into insert->numBin). I can't think of a single reason why this would happen, nor have I got any idea what else I should provide as further info. (Compiling on latest stable MinGW under fully patched Windows 7; my friend is using MinGW under Windows XP. On my machine, at least, it only happens when GDB is not attached.) Any ideas? Suggestions? Possible exorcism techniques? (The current hack is copying the sig pointer to a temp variable and restoring it after malloc. It breaks anyway; it turns out the 2nd malloc corrupts it too. Interestingly enough, it resets sig to the exact same value as the first one.)

    Read the article

  • According to MSDN ReadFile() Win32 function may incorrectly report read operation completion. When?

    - by Martin Dobšík
    The MSDN states in its description of the ReadFile() function (http://msdn.microsoft.com/en-us/library/aa365467%28VS.85%29.aspx): “If hFile is opened with FILE_FLAG_OVERLAPPED, the lpOverlapped parameter must point to a valid and unique OVERLAPPED structure, otherwise the function can incorrectly report that the read operation is complete.”

    I have some applications that violate the above recommendation, and I would like to know the severity of the problem. The program uses a named pipe that has been created with FILE_FLAG_OVERLAPPED, but it reads from it using the following call:

        ReadFile(handle, &buf, n, &n_read, NULL);

    That means it passes NULL as the lpOverlapped parameter. According to the documentation, that call should not work correctly in some circumstances. I have spent a lot of time trying to reproduce the problem, but I was unable to! I always got all the data in the right place at the right time, though I was testing only named pipes.

    Would anybody know when I can expect ReadFile() to return incorrectly and report successful completion even though the data is not yet in the buffer? What would have to happen in order to reproduce the problem? Does it happen with files, pipes, sockets, consoles, or other devices? Do I have to use a particular version of the OS, or a particular kind of read (like registering the handle with an I/O completion port), or a particular synchronization of the reading and writing processes/threads? When would this fail? It works for me :/ Please help! With regards, Martin

    Read the article

  • SQL select descendants of a row

    - by Joey Adams
    Suppose a tree structure is implemented in SQL like this:

        CREATE TABLE nodes (
            id     INTEGER PRIMARY KEY,
            parent INTEGER -- references nodes(id)
        );

    Although cycles can be created in this representation, let's assume we never let that happen. The table will only store a collection of roots (records where parent is null) and their descendants. The goal is, given an id of a node in the table, to find all nodes that are descendants of it. A is a descendant of B if either A's parent is B or A's parent is a descendant of B. Note the recursive definition. Here is some sample data:

        INSERT INTO nodes VALUES (1, NULL);
        INSERT INTO nodes VALUES (2, 1);
        INSERT INTO nodes VALUES (3, 2);
        INSERT INTO nodes VALUES (4, 3);
        INSERT INTO nodes VALUES (5, 3);
        INSERT INTO nodes VALUES (6, 2);

    which represents:

        1
        `-- 2
            |-- 3
            |   |-- 4
            |   `-- 5
            `-- 6

    We can select the (immediate) children of 1 by doing this:

        SELECT a.* FROM nodes AS a WHERE parent=1;

    We can select the children and grandchildren of 1 by doing this:

        SELECT a.* FROM nodes AS a WHERE parent=1
        UNION ALL
        SELECT b.* FROM nodes AS a, nodes AS b
         WHERE a.parent=1 AND b.parent=a.id;

    We can select the children, grandchildren, and great-grandchildren of 1 by doing this:

        SELECT a.* FROM nodes AS a WHERE parent=1
        UNION ALL
        SELECT b.* FROM nodes AS a, nodes AS b
         WHERE a.parent=1 AND b.parent=a.id
        UNION ALL
        SELECT c.* FROM nodes AS a, nodes AS b, nodes AS c
         WHERE a.parent=1 AND b.parent=a.id AND c.parent=b.id;

    How can a query be constructed that gets all descendants of node 1 rather than those at a finite depth? It seems like I would need to create a recursive query or something. I'd like to know if such a query is possible using SQLite. However, if this type of query requires features not available in SQLite, I'm curious to know whether it can be done in other SQL databases.
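
    At the time the question was asked SQLite had no recursive queries, but SQLite 3.8.3 and later (like most other SQL databases) support recursive common table expressions, which express exactly this "all descendants" closure without a fixed depth. A sketch, run through Python's sqlite3 module with the sample data above:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE nodes (id INTEGER PRIMARY KEY, parent INTEGER);
            INSERT INTO nodes VALUES (1, NULL), (2, 1), (3, 2), (4, 3), (5, 3), (6, 2);
        """)
        rows = con.execute("""
            WITH RECURSIVE descendants(id) AS (
                SELECT id FROM nodes WHERE parent = ?      -- immediate children
                UNION ALL
                SELECT n.id FROM nodes AS n
                JOIN descendants AS d ON n.parent = d.id   -- children of anything found so far
            )
            SELECT id FROM descendants
        """, (1,)).fetchall()
        print(sorted(r[0] for r in rows))   # -> [2, 3, 4, 5, 6]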

    Read the article

  • MySQL stuck on "using filesort" when doing an "order by"

    - by noko
    I can't seem to get my query to stop using filesort. This is my query:

        SELECT s.`pilot`, p.`name`, s.`sector`, s.`hull`
        FROM `pilots` p
        LEFT JOIN `ships` s ON ( (s.`game` = p.`game`) AND (s.`pilot` = p.`id`) )
        WHERE p.`game` = 1 AND p.`id` <> 2 AND s.`sector` = 43 AND s.`hull` > 0
        ORDER BY p.`last_move` DESC

    Table structures:

        CREATE TABLE IF NOT EXISTS `pilots` (
          `id` mediumint(5) unsigned NOT NULL AUTO_INCREMENT,
          `game` tinyint(3) unsigned NOT NULL DEFAULT '0',
          `last_move` int(10) NOT NULL DEFAULT '0',
          UNIQUE KEY `id` (`id`),
          KEY `last_move` (`last_move`),
          KEY `game_id_lastmove` (`game`,`id`,`last_move`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ;

        CREATE TABLE IF NOT EXISTS `ships` (
          `id` mediumint(5) unsigned NOT NULL AUTO_INCREMENT,
          `game` tinyint(3) unsigned NOT NULL DEFAULT '0',
          `pilot` mediumint(5) unsigned NOT NULL DEFAULT '0',
          `sector` smallint(5) unsigned NOT NULL DEFAULT '0',
          `hull` smallint(4) unsigned NOT NULL DEFAULT '50',
          UNIQUE KEY `id` (`id`),
          KEY `game` (`game`),
          KEY `pilot` (`pilot`),
          KEY `sector` (`sector`),
          KEY `hull` (`hull`),
          KEY `game_2` (`game`,`pilot`,`sector`,`hull`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ;

    The EXPLAIN output:

        id  select_type  table  type  possible_keys         key               key_len  ref                               rows  Extra
        1   SIMPLE       p      ref   id,game_id_lastmove   game_id_lastmove  1        const                             7     Using where; Using filesort
        1   SIMPLE       s      ref   game,pilot,sector...  game_2            6        const,fightclub_alpha.p.id,const  1     Using where; Using index

    Edit: I cut some of the unnecessary pieces out of my queries/table structure. Anybody have any ideas?

    Read the article

  • Why do properties require explicit typing during compilation?

    - by ctpenrose
    Compilation using property syntax requires the type of the receiver to be known at compile time. I may not understand something, but this seems like a broken or incomplete compiler implementation, considering that Objective-C is a dynamic language. The property "comment" is defined with:

        @property (nonatomic, retain) NSString *comment;

    and synthesized with:

        @synthesize comment;

    "document" is an instance of one of several classes which conform to:

        @protocol DocumentComment <NSObject>
        @property (nonatomic, retain) NSString *comment;
        @end

    and is simply declared as:

        id document;

    When using the following property syntax:

        stringObject = document.comment;

    the following error is generated by gcc:

        error: request for member 'comment' in something not a structure or union

    However, the following equivalent receiver-method syntax compiles without warning or error and works fine, as expected, at run time:

        stringObject = [document comment];

    I don't understand why properties require the type of the receiver to be known at compile time. Is there something I am missing? I simply use the latter syntax to avoid the error in situations where the receiving object has a dynamic type. Properties seem half-baked.

    Read the article

  • Java equivalent to VS solution file

    - by Chris
    I'm a C# guy trying to learn Java. I understand the syntax and the basic architecture of the Java platform, and have no problem doing smaller projects myself, but I'd really like to be able to download some open source projects to learn from the work of others. However, I'm running into a stumbling block that I can't seem to find any information on. When I download an open source .NET project, I can open the .sln file with visual studio and everything just loads. Sure, there's occasionally a missing reference or something, but there's really very little configuration required to get things going. I'm not sensing the same ease of use with Java. I'm using eclipse at the moment, and it feels like for every project I have to create a brand new Eclipse project using "create from existing source", and almost nothing compiles properly without significant reconfiguration. In the case of web projects, it's even worse, because Eclipse doesn't appear to support creating a web project from existing source. I have to create a standard Java project from source, then then apparently modify the project file to include the bindings for the web toolkit stuff to work properly. Assuming I want to be able to contribute to a project later on, I shouldn't have to be making such drastic changes to the file structure to get my IDE to a workable state. What am I missing?

    Read the article

  • An SVN error (200 OK) when checking out from my online repo

    - by J. LaRosee
    I'm trying to set up my first repo on my host and am getting this error when I use TortoiseSVN to check out the project:

        Error: OPTIONS of 'http://mywebsite.com/svn/myproject': 200 OK (http://mywebsite.com)

    Here is what I did:

    1) ssh into my host, head to /home/myaccnt and run 'svnadmin create svn'
    2) create my project repo: 'svn mkdir svn/myproject'
    3) add files to the repo: cd /home/myaccnt/.../myproject (which has /tags, /branch, /trunk); 'svn import file:///home/myaccnt/svn/myproject' (the big ole list of files being added is seen at this point)

    At this point I think that I've set up my repo and imported my project into the repo, so I'm ready to check out using TortoiseSVN on my Windows box. So:

    4) In the folder I'd like to check out to, I right-click, choose 'SVN Checkout' and then make sure my URL is: http://mywebsite.com/svn/myproject

    Result?

        Error: OPTIONS of 'http://mywebsite.com/svn/myproject': 200 OK (http://mywebsite.com)

    Anyone have any thoughts for me? I'm likely missing something fundamental with the structure of my repo or htaccess... or something. Many thanks in advance. -JL

    Read the article

  • Parsing string, with Boost Spirit 2, to fill data in user defined struct

    - by Surya
    I'm using Boost.Spirit as distributed with Boost 1.42.0, with VS2005. My problem is this: I have a string delimited by commas, where the first 3 fields are strings and the rest are numbers, like this:

        String1,String2,String3,12.0,12.1,13.0,13.1,12.4

    My rules are like this:

        qi::rule<string::iterator, qi::skip_type> stringrule = *(char_ - ',');
        qi::rule<string::iterator, qi::skip_type> myrule =
            repeat(3)[*(char_ - ',') >> ','] >> (double_ % ',');

    I'm trying to store the data in a structure like this:

        struct MyStruct
        {
            vector<string> stringVector;
            vector<double> doubleVector;
        };

        MyStruct var;

    I've wrapped it in BOOST_FUSION_ADAPT_STRUCT to use it with Spirit:

        BOOST_FUSION_ADAPT_STRUCT(
            MyStruct,
            (vector<string>, stringVector)
            (vector<double>, doubleVector))

    My parse function parses the line and returns true, and after

        qi::phrase_parse(iterBegin, iterEnd, myrule, boost::spirit::ascii::space, var);

    I'm expecting var.stringVector and var.doubleVector to be properly filled, but that is not the case. What is going wrong? Thanks in advance, Surya

    Read the article

  • MooTools Event Delegation using HTML5 data attributes.

    - by Anurag
    Is it possible to have event delegation using the HTML5 data attributes in MooTools? The HTML structure I have is:

        <div id="parent">
          <div>not selectable</div>
          <div data-selectable="true">selectable</div>
          <div>not selectable either.</div>
          <div data-selectable="true">also selectable</div>
        </div>

    And I want to set up <div id="parent"> to listen to all clicks only on child elements that have the data-selectable attribute. Please let me know if I'm doing something wrong. The events are being set up as:

        $("parent").addEvent("click:relay([data-selectable])", function(event, el) {
            alert(this.get('text'));
        });

    but the click callback is fired on clicking all divs, not just the ones with a data-selectable attribute defined. You can see this example on http://jsfiddle.net/NUGD4/

    A workaround is to add this as a CSS class, which works with delegation, but I would prefer to be able to use data attributes as they are used throughout the application.

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and an application where that data is needed in memory, as one of the standard Java collections or as an array.

    In my current case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one column is a float, one column is an integer. I need access to this data as an array in Java. It seems like I could:

    1. Encode it in XML - decoding this would be CPU-intensive, in my experience.
    2. Build it into a SQLite database - seems like a lot of overhead for static data I only need array-style access to in RAM.
    3. Build it into a binary blob and read it in. (I've never done this in Java; I miss void *.)
    4. Build a Python script that takes the CSV version of my data and spits out a Java function that adds the values to my desired structure as hard-coded values.
    5. Store a string array via Android's resource mechanism and compute the other 2 columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not do, for load-time and battery-usage reasons.

    I mostly work on low-power embedded applications in C, and as such #4 is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too battery-usage conscious, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?
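
    Option 4 from the list above is a one-off, build-time code generation step, and it stays small. A hedged sketch of what such a generator could look like (the file, class, and array names are made up for illustration; the CSV is assumed to hold one name,float,int row per record):

        import csv

        # Read the pre-calculated spreadsheet (exported as CSV).
        with open("data.csv") as f:
            rows = list(csv.reader(f))

        # Emit a Java class with the three columns as parallel hard-coded arrays.
        with open("StaticData.java", "w") as out:
            out.write("public final class StaticData {\n")
            out.write("    public static final String[] NAMES = { %s };\n"
                      % ", ".join('"%s"' % r[0] for r in rows))
            out.write("    public static final float[] FACTORS = { %s };\n"
                      % ", ".join("%sf" % r[1] for r in rows))
            out.write("    public static final int[] COUNTS = { %s };\n"
                      % ", ".join(r[2] for r in rows))
            out.write("}\n")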

    Read the article

  • "import numpy" tries to load my own package

    - by Sebastian
    I have a Python (2.7) project containing my own packages util and operator (and so forth). I read about relative imports, but perhaps I didn't understand. I have the following directory structure:

        top-dir/
            util/__init__.py (empty)
            util/ua.py
            util/ub.py
            operator/__init__.py
            ...
            test/test1.py

    The test1.py file contains:

        #!/usr/bin/env python2
        from __future__ import absolute_import  # removing this line doesn't change anything; it's default functionality in python2.7, I guess
        import numpy as np

    It's fine when I execute test1.py inside the test/ folder. But when I move to top-dir/, the import numpy wants to include my own util package:

        Traceback (most recent call last):
          File "tests/laplace_2d_square.py", line 4, in <module>
            import numpy as np
          File "/usr/lib/python2.7/site-packages/numpy/__init__.py", line 137, in <module>
            import add_newdocs
          File "/usr/lib/python2.7/site-packages/numpy/add_newdocs.py", line 9, in <module>
            from numpy.lib import add_newdoc
          File "/usr/lib/python2.7/site-packages/numpy/lib/__init__.py", line 4, in <module>
            from type_check import *
          File "/usr/lib/python2.7/site-packages/numpy/lib/type_check.py", line 8, in <module>
            import numpy.core.numeric as _nx
          File "/usr/lib/python2.7/site-packages/numpy/core/__init__.py", line 45, in <module>
            from numpy.testing import Tester
          File "/usr/lib/python2.7/site-packages/numpy/testing/__init__.py", line 8, in <module>
            from unittest import TestCase
          File "/usr/lib/python2.7/unittest/__init__.py", line 58, in <module>
            from .result import TestResult
          File "/usr/lib/python2.7/unittest/result.py", line 9, in <module>
            from . import util
          File "/usr/lib/python2.7/unittest/util.py", line 2, in <module>
            from collections import namedtuple, OrderedDict
          File "/usr/lib/python2.7/collections.py", line 9, in <module>
            from operator import itemgetter as _itemgetter, eq as _eq
        ImportError: cannot import name itemgetter

    The troublesome line is either

        from . import util

    or perhaps

        from operator import itemgetter as _itemgetter, eq as _eq

    What can I do?
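
    The traceback ends with the standard library's collections module picking up an operator module that has no itemgetter, which is the usual symptom of a local package shadowing a standard name on the import path. A quick diagnostic sketch (run from top-dir/ the same way the failing script is started) to see which modules actually win and what the front of sys.path looks like:

        import sys
        print(sys.path[:3])                      # which directories are searched first

        import operator, util
        print(getattr(operator, "__file__", "built-in"))
        print(getattr(util, "__file__", "built-in"))

    If those paths point into top-dir/, renaming the local operator package (or keeping top-dir/ off the import path) is the usual way out, since the modules numpy and unittest import by those names have to resolve to the standard ones.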

    Read the article

  • Django-modpython project in a directory

    - by Ankit Jaiswal
    Hi all, I am deploying a Django project on an Apache server with mod_python on Linux. I have created a directory structure like /var/www/html/django/demoInstall, where demoInstall is my project. In httpd.conf I have put the following:

        <Location "/django/demoInstall">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE demoInstall.settings
            PythonOption django.root django/demoInstall
            PythonDebug On
            PythonPath "['/var/www/html/django'] + sys.path"
        </Location>

    It is getting me the Django environment, but the issue is that the URLs mentioned in urls.py are not working correctly. In my URL file I have a mapping like:

        (r'^$', views.index),

    Now, in the browser I request a URL like http://domainname/django/demoInstall/ and I am expecting views.index to be invoked, but I guess it is expecting the URL to be only http://domainname/. When I change the URL mapping to

        (r'^django/demoInstall$', views.index),

    it works fine. Please suggest, as I do not want to change all the mappings in the URL config file. Thanks in advance.

    Read the article

  • Button Click Event Getting Lost

    - by AlishahNovin
    I have a menu and submenu structure in Silverlight, and I want the submenu to disappear when the parent menu item loses focus - standard menu behavior. I've noticed that the submenu's click events are lost when a submenu item is clicked, because the parent menu item loses focus and the submenu disappears. It's easier to explain with code:

        ParentMenuBtn.Click += delegate
        {
            SubMenu.Visibility = (SubMenu.Visibility == Visibility.Visible)
                ? Visibility.Collapsed : Visibility.Visible;
        };

        ParentMenuBtn.LostFocus += delegate
        {
            SubMenu.Visibility = Visibility.Collapsed;
        };

        SubMenuBtn.Click += delegate
        {
            throw new Exception("This will never be thrown.");
        };

    In my example, when SubMenuBtn is clicked, the first event that triggers is ParentMenuBtn.LostFocus(), which hides the container of SubMenuBtn. Once the container's visibility collapses, the Click event is never triggered. I'd rather avoid having to hide the submenu each time, but I'm a little surprised that the Click event is never triggered as a result... Anyone have any thoughts about this?

    Read the article

  • PDF parsing file trailer

    - by Ralph
    It is not clear from the PDF ISO standard document (PDF32000-2008) whether a comment may follow the startxref keyword:

        startxref
        Byte_offset_of_last_cross-reference_section
        %%EOF

    The standard does seem to imply that comments may appear anywhere:

        7.2.3 Comments
        Any occurrence of the PERCENT SIGN (25h) outside a string or stream introduces a comment.
        The comment consists of all characters after the PERCENT SIGN and up to but not including
        the end of the line, including regular, delimiter, SPACE (20h), and HORIZONTAL TAB
        characters (09h). A conforming reader shall ignore comments, and treat them as single
        white-space characters. That is, a comment separates the token preceding it from the one
        following it.

        EXAMPLE: The PDF fragment in this example is syntactically equivalent to just the tokens
        abc and 123.

            abc% comment ( /%) blah blah blah
            123

        Comments (other than the %PDF-n.m and %%EOF comments described in 7.5, "File Structure")
        have no semantics. They are not necessarily preserved by applications that edit PDF files.

    If comments are allowed to appear after startxref, parsing the file becomes more difficult, because you do not know how far to back up from the %%EOF comment to start parsing to find the byte offset. Any ideas?
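
    In practice, readers tend to deal with this by scanning the tail of the file rather than backing up a fixed distance: locate the last startxref keyword, then take the first integer token after it, ignoring anything that follows a % on the same line. A minimal sketch of that tolerant scan (Python; tail_size is an arbitrary assumption, and this is an illustration rather than a full PDF tokenizer):

        def last_xref_offset(path, tail_size=2048):
            """Return the byte offset that follows the last 'startxref' keyword."""
            with open(path, "rb") as f:
                f.seek(0, 2)                          # jump to end of file
                f.seek(max(0, f.tell() - tail_size))  # back up a bounded amount
                tail = f.read()
            idx = tail.rfind(b"startxref")
            if idx < 0:
                raise ValueError("no startxref found near end of file")
            for line in tail[idx + len(b"startxref"):].splitlines():
                line = line.split(b"%")[0].strip()    # drop a trailing comment, if any
                if line.isdigit():
                    return int(line)
            raise ValueError("no byte offset found after startxref")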

    Read the article

  • Metamorphs Messing Up CSS in Ember.js Views

    - by Austin Fatheree
    I'm using Ember.js / Handlebars to loop through a collection and spit out some items that I'd like Bootstrap to handle nice and responsive-like. Here is the issue: the bootstrap-responsive CSS has some declarations in it like:

        .row-fluid > [class*="span"]:first-child {
          margin-left: 0;
        }

    and

        .row-fluid:before,
        .row-fluid:after {
          display: table;
          content: "";
        }

    These rules seem to target the first children. When I loop through my collection in Handlebars I end up with a bunch of metamorph code around my items:

        <div class="row-fluid">
          {{#each restaurantList}}
            {{view GS.vHomePageRestList content=this class="span6"}}
          {{/each}}
        </div>

    Here is what is produced:

        <div class="row-fluid">
          <script id="metamorph-9-start" type="text/x-placeholder"></script>
          <script id="metamorph-104-start" type="text/x-placeholder"></script>
          <div id="ember2527" class="ember-view span6">
            My View
          </div>
          <script id="metamorph-104-end" type="text/x-placeholder"></script>
          <script id="metamorph-105-start" type="text/x-placeholder"></script>
          <div id="ember2574" class="ember-view span6">
            My View 2
          </div>
          <script id="metamorph-105-end" type="text/x-placeholder"></script>
          <script id="metamorph-9-end" type="text/x-placeholder"></script>
        </div>

    So my question is this:

    1. How can I tell CSS to ignore script tags? or
    2. How can I edit the CSS bindings so that they skip over script tags when selecting the first or first child? or
    3. How can I structure this so that Ember uses fewer/no metamorph tags?

    Here is a fiddle: http://jsfiddle.net/skilesare/SgwsJ/

    Read the article

  • recursive delete trigger and ON DELETE CASCADE constraints are not deleting everything

    - by bitbonk
    I have a very simple data model that represents a tree structure: the RootEntity is the root of such a tree; it can contain children of type ContainerEntity and of type AtomEntity. The type ContainerEntity can again contain children of type ContainerEntity and of type AtomEntity, but cannot contain children of type RootEntity. Children are referenced in a well-known order. The DB model for this is below.

    My problem now is that when I delete a RootEntity, I want all children to be deleted recursively. I have created foreign keys with CASCADE DELETE and two delete triggers for this, but it is not deleting everything; it always leaves some items in the ContainerEntity, AtomEntity, ContainerEntity_Children and AtomEntity_Children tables, seemingly beginning at recursion level 3.

        CREATE TABLE RootEntity (
            Id UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_RootEntity PRIMARY KEY NONCLUSTERED (Id),
        );

        CREATE TABLE ContainerEntity (
            Id UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_ContainerEntity PRIMARY KEY NONCLUSTERED (Id),
        );

        CREATE TABLE AtomEntity (
            Id UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_AtomEntity PRIMARY KEY NONCLUSTERED (Id),
        );

        CREATE TABLE RootEntity_Children (
            ParentId UNIQUEIDENTIFIER NOT NULL,
            OrderIndex INT NOT NULL,
            ChildContainerEntityId UNIQUEIDENTIFIER NULL,
            ChildAtomEntityId UNIQUEIDENTIFIER NULL,
            ChildIsContainerEntity BIT NOT NULL,
            CONSTRAINT PK_RootEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),

            -- foreign key to parent RootEntity
            CONSTRAINT FK_RootEntiry_Children__RootEntity FOREIGN KEY (ParentId)
                REFERENCES RootEntity (Id) ON DELETE CASCADE,

            -- foreign key to referenced (child) ContainerEntity
            CONSTRAINT FK_RootEntiry_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,

            -- foreign key to referenced (child) AtomEntity
            CONSTRAINT FK_RootEntiry_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
                REFERENCES AtomEntity (Id) ON DELETE CASCADE,
        );

        CREATE TABLE ContainerEntity_Children (
            ParentId UNIQUEIDENTIFIER NOT NULL,
            OrderIndex INT NOT NULL,
            ChildContainerEntityId UNIQUEIDENTIFIER NULL,
            ChildAtomEntityId UNIQUEIDENTIFIER NULL,
            ChildIsContainerEntity BIT NOT NULL,
            CONSTRAINT PK_ContainerEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),

            -- foreign key to parent ContainerEntity
            CONSTRAINT FK_ContainerEntity_Children__RootEntity FOREIGN KEY (ParentId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,

            -- foreign key to referenced (child) ContainerEntity
            CONSTRAINT FK_ContainerEntity_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,

            -- foreign key to referenced (child) AtomEntity
            CONSTRAINT FK_ContainerEntity_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
                REFERENCES AtomEntity (Id) ON DELETE CASCADE,
        );

        CREATE TRIGGER Delete_RootEntity_Children ON RootEntity_Children FOR DELETE AS
            DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
            DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted)
        GO

        CREATE TRIGGER Delete_ContainerEntiy_Children ON ContainerEntity_Children FOR DELETE AS
            DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
            DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted)
        GO

    Read the article

  • Filtering documents against a dictionary key in MongoDB

    - by Thomas
    I have a collection of articles in MongoDB with the following structure:

        {
            'category': 'Legislature',
            'updated': datetime.datetime(2010, 3, 19, 15, 32, 22, 107000),
            'byline': None,
            'tags': {
                'party': ['Peter Hoekstra', 'Virg Bernero', 'Alma Smith', 'Mike Bouchard', 'Tom George', 'Rick Snyder'],
                'geography': ['Michigan', 'United States', 'North America']
            },
            'headline': '2 Mich. gubernatorial candidates speak to students',
            'text': [
                'BEVERLY HILLS, Mich. (AP) \u2014 Two Democratic and Republican gubernatorial candidates found common ground while speaking to private school students in suburban Detroit',
                "Democratic House Speaker state Rep. Andy Dillon and Republican U.S. Rep. Pete Hoekstra said Friday a more business-friendly government can help reduce Michigan's nation-leading unemployment rate.",
                "The candidates were invited to Detroit Country Day Upper School in Beverly Hills to offer ideas for Michigan's future.",
                'Besides Dillon, the Democratic field includes Lansing Mayor Virg Bernero and state Rep. Alma Wheeler Smith. Other Republicans running are Oakland County Sheriff Mike Bouchard, Attorney General Mike Cox, state Sen. Tom George and Ann Arbor business leader Rick Snyder.',
                'Former Republican U.S. Rep. Joe Schwarz is considering running as an independent.'
            ],
            'dateline': 'BEVERLY HILLS, Mich.',
            'published': datetime.datetime(2010, 3, 19, 8, 0, 31),
            'keywords': "Governor's Race",
            '_id': ObjectId('4ba39721e0e16cb25fadbb40'),
            'article_id': 'urn:publicid:ap.org:0611e36fb084458aa620c0187999db7e',
            'slug': "BC-MI--Governor's Race,2nd Ld-Writethr"
        }

    If I wanted to write a query that looked for all articles that had at least 1 geography tag, how would I do that? I have tried writing db.articles.find( {'tags': 'geography'} ), but that doesn't appear to work. I've also thought about changing the search parameter to 'tags.geography', but am having a devil of a time figuring out what the search predicate would be.
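
    Since tags.geography holds an array, one way to express "has at least one geography tag" is to test that element 0 of that array exists; a bare $exists on the field itself would also match documents whose list is empty. A small pymongo-flavored sketch (the database name and variables are assumptions for illustration):

        from pymongo import MongoClient

        db = MongoClient()["news"]   # hypothetical database name

        # Documents whose tags.geography array exists and has at least one element.
        cursor = db.articles.find({"tags.geography.0": {"$exists": True}})
        for doc in cursor:
            print(doc["headline"], doc["tags"]["geography"])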

    Read the article
