Search Results

Search found 32120 results on 1285 pages for 'django multi table inheri'.


  • Finding the right terminology for a dictionary table

    - by Karl Forner
    My concern is about what I currently call "dictionary tables": database tables containing a list of controlled vocabulary. Let's use an example. Suppose you have a table User containing the fields:

        user_id : primary key
        first_name
        last_name
        user_type_id : foreign key to the UserType table

    and another table UserType with just two fields:

        user_type_id : primary key
        name : the name/value of a particular type of user

    For instance, the UserType table may contain (1, Administrator), (2, PowerUser), (3, Normal)... My question is: what is the canonical term for a table like UserType, which only contains a list of (distinct) words? I want to publish some code that helps manage this kind of table, but first I have to name it! Thanks for your help. Current state of thought: for now I feel Lookup Tables is a good term. It is also used with the same meaning in these posts:

        http://dbix-class.35028.n2.nabble.com/RFC-Component-for-Lookup-tables-td3504085.html
        http://tonyandrews.blogspot.de/2004/10/otlt-and-eav-two-big-design-mistakes.html
        Lookup Tables Best Practices: DB Tables... or Enumerations

    The only problem is that "lookup table" is also sometimes used to name a junction table.

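    Given the 'django' slant of this page's search term, the pattern the question describes maps directly onto a pair of Django models. A minimal, hedged sketch (model names mirror the question's example; field sizes and the on_delete choice are illustrative assumptions, not the poster's code):

        # Hedged Django-style sketch of the lookup-table pattern described above.
        # Names mirror the question's example; max_length/on_delete are assumptions.
        from django.db import models

        class UserType(models.Model):
            # The lookup table: one row per controlled-vocabulary term,
            # e.g. (1, "Administrator"), (2, "PowerUser"), (3, "Normal").
            name = models.CharField(max_length=50, unique=True)

            def __str__(self):
                return self.name

        class User(models.Model):
            first_name = models.CharField(max_length=50)
            last_name = models.CharField(max_length=50)
            # Foreign key into the lookup table, as in the question's schema.
            user_type = models.ForeignKey(UserType, on_delete=models.PROTECT)

    Whatever the table ends up being called, keeping it as a real model (rather than hard-coded choices) preserves the "controlled vocabulary lives in the database" property the question is after.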

  • Internet Explorer table 1 pixel spacing problem

    - by Dennis G.
    I've found a strange problem with Internet Explorer related to table spacing and cannot find a way to work around it. An empty table results in a single pixel of white space with Internet Explorer (6 and 7; 8 not yet tested), while all other browsers ignore the empty table. Here's a picture of the problem. And here is the minimum HTML code to reproduce the issue (please note that there are more margin/padding CSS attributes and table attributes specified than really needed; I just tested whether this fixes IE's behavior):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <body>
          <div style="width: 200px; border: 1px black solid">
            <table border="0" cellspacing="0" cellpadding="0" style="margin: 0pt; padding: 0pt; border-collapse: collapse;">
              <tr>
                <td style="padding: 0; margin: 0"> </td>
              </tr>
            </table>
            <div style="background: red">
              Test
            </div>
          </div>
        </body>
        </html>

    I'm not using an empty table as specified in the example above, but this was the minimum code that displays this behavior. Any ideas on how to fix this and remove the white space in IE?


  • jQuery problem - can't expand the row above in a table

    - by apg1985
    Hi people, what I can't figure out is how I would toggle a row in a table using the one below it. Say I have a table with two rows: the first contains content and the one below contains a button. When the page loads, the content row is hidden, and when you click the button it toggles the content row on and off. In the example, the first table works but the second does not; I need the second one to work.

        <!DOCTYPE HTML>
        <html>
        <head>
          <title>Testing Horizontal Accordion</title>
          <script type="text/javascript" src="jquery.js"></script>
          <script type="text/javascript">
            $(document).ready(function() {
              $(".sectionhead").toggle(
                function() { $(this).next("tr").hide(); },
                function() { $(this).next("tr").show(); }
              )
            });
          </script>
        </head>
        <body>
          <table>
            <tr class="sectionhead"><td></td></tr>
            <tr class="child"><td>child</td></tr>
          </table>
          <br>
          <table>
            <tr class="child"><td>child</td></tr>
            <tr class="sectionhead"><td></td></tr>
          </table>
        </body>
        </html>


  • SQL: Order randomly when inserting objects to a table

    - by Ekaterina
    I have a UDF that selects the top 6 objects from a table (with a union; code below) and inserts them into another table (btw, SQL 2005). I paste the UDF below, and what the code does is:

        - select objects for a specific city and add a level to those (from table Europe)
        - union that selection with a selection from the same table for objects that are from the same country, and add a level to those

    From the union, a selection is made to get the top 6 objects, ordered by level, so the objects from the same city come first, and if there aren't any available, objects from the same country are returned by the selection. And my problem is that I want to make a random selection to get random objects from table Europe, but because I insert the result of my selection into a table, I can't use order by newid() or the rand() function, because they are time-dependent, so I get the following errors:

        Invalid use of side-effecting or time-dependent operator in 'newid' within a function.
        Invalid use of side-effecting or time-dependent operator in 'rand' within a function.

    UDF:

        ALTER FUNCTION [dbo].[Objects] (@id uniqueidentifier)
        RETURNS @objects TABLE
        (
            ObjectId uniqueidentifier NOT NULL,
            InternalId uniqueidentifier NOT NULL
        )
        AS
        BEGIN
            declare @city varchar(50)
            declare @country int

            select @city = city, @country = country
            from Europe
            where internalId = @id

            insert @objects
            select @id, internalId from
            (
                select distinct top 6 [level], internalId from
                (
                    select top 6 1 as [level], internalId
                    from Europe N4
                    where N4.city = @city and N4.internalId != @id
                    union
                    select top 6 2 as [level], internalId
                    from Europe N5
                    where N5.countryId = @country and N5.internalId != @id
                ) as selection_1
                order by [level]
            ) as selection_2

            return
        END

    If you have fresh ideas, please share them with me. (Just please don't suggest ordering by newid() or adding a rand() column with a DateTime seed (by ms or something), because that won't work.)


  • iPhone xcode - Activity Indicator with tab bar controller and multi table view controllers

    - by Frames84
    I've looked for a tutorial and can't seem to find one for an activity indicator in a table view's nav bar. In my mainWindow.xib I have a Tab Bar Controller with 4 tab controllers, each containing a table view. Each loads JSON feeds using the framework hosted at Google. In one of my view controllers I can add an activity indicator to the nav bar by using:

        UIActivityIndicatorView *activityIndcator = [[UIActivityIndicatorView alloc] initWithFrame:CGRectMake(0, 0, 20, 20)];
        [activityIndcator startAnimating];
        UIBarButtonItem *activityItem = [[UIBarButtonItem alloc] initWithCustomView:activityIndcator];
        self.navigationItem.rightBarButtonItem = activityItem;

    and I can turn it off by using:

        self.navigationItem.rightBarButtonItem.enabled = FALSE;

    But if I place this in my viewDidLoad event it shows all the time. I only want it to show when I select a row in my table view, so I added it at the top of didSelectRowAtIndexPath and the stop line after I load a feed. It shows, but it takes a second or two to appear and only shows for about half a second. Is there an event that fires before the didSelectRowAtIndexPath event, a type of loading event? If not, what is the standard method for implementing such functionality?


  • JCombobox containing enum values inside a table

    - by Edan
    Hello, I have a class containing an enum with values (names). In another class I would like to add to a table a cell type of JComboBox that will use these enum values. My problem is combining the string values and the enum. For example, the enum:

        enum item_Type {entree, main_Meal, Dessert, Drink}

    For example, the table class:

        setTitle("Add new item");
        setSize(300, 80);
        setBackground(Color.gray);

        // Create a panel to hold all other components
        topPanel = new JPanel();
        topPanel.setLayout(new BorderLayout());
        getContentPane().add(topPanel);

        //new JComboBox(item_Type.values());
        JComboBox aaa = new JComboBox();
        aaa = new JComboBox(item_Type.values());
        TableColumn sportColumn = table.getColumnModel().getColumn(2);

        // Create column names
        String columnNames[] = {"Item Description", "Item Type", "Item Price"};

        // Create some data
        String dataValues[][] = {{ "0", aaa, "0" }};

        // Create a new table instance
        table = new JTable(dataValues, columnNames);

        // Add the table to a scrolling pane
        scrollPane = new JScrollPane(table);
        topPanel.add(scrollPane, BorderLayout.CENTER);

    I know that in the dataValues array I can't use aaa (the enum JComboBox). How can I do that? Thanks in advance.


  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size of around 120,000 or 150,000 rows, I see a dramatic performance cost for any query on the table. For comparison, I have another table, which grows in size at about the same rate but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements. Some specifics: both tables contain a DateTime field called SampleTime. A statement like the following (it's in a stored procedure, but I show you the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns. That depends on which drive I set my database on. At the moment it sits on a different spindle (not C:) and it takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam


  • NHibernate / Fluent - Mapping multiple objects to single lookup table

    - by Al
    Hi all, I am struggling a little in getting my mapping right. What I have is a single self-joined table of lookup values of certain types. Each lookup can have a parent, which can be of a different type. For simplicity's sake let's take the Country and State example. So the lookup table would look like this:

        Lookups
        Id
        Key
        Value
        LookupType
        ParentId - self-joining to Id

    The base class:

        public class Lookup : BaseEntity
        {
            public Lookup() {}

            public Lookup(string key, string value)
            {
                Key = key;
                Value = value;
            }

            public virtual Lookup Parent { get; set; }

            [DomainSignature]
            [NotNullNotEmpty]
            public virtual LookupType LookupType { get; set; }

            [NotNullNotEmpty]
            public virtual string Key { get; set; }

            [NotNullNotEmpty]
            public virtual string Value { get; set; }
        }

    The lookup map:

        public class LookupMap : IAutoMappingOverride<DBLookup>
        {
            public void Override(AutoMapping<Lookup> map)
            {
                map.Table("Lookups");
                map.References(x => x.Parent, "ParentId").ForeignKey("Id");
                map.DiscriminateSubClassesOnColumn<string>("LookupType").CustomType(typeof(LookupType));
            }
        }

    The base subclass map for subclasses:

        public class BaseLookupMap<T> : SubclassMap<T> where T : DBLookup
        {
            protected BaseLookupMap() { }

            protected BaseLookupMap(LookupType lookupType)
            {
                DiscriminatorValue(lookupType);
                Table("Lookups");
            }
        }

    An example subclass map:

        public class StateMap : BaseLookupMap<State>
        {
            protected StateMap() : base(LookupType.State) { }
        }

    Now I've almost got my mappings set; however, the mapping is still expecting a table-per-class setup, so it expects a 'State' table to exist with a reference to the state's Id in the Lookup table. I hope this makes sense. This doesn't seem like an uncommon approach when wanting to keep lookup-type values configurable. Thanks in advance. Al


  • Multi-Column Join in Hibernate/JPA Annotations

    - by bowsie
    I have two entities which I would like to join through multiple columns. These columns are shared by an @Embeddable object that is shared by both entities. In the example below, Foo can have only one Bar but Bar can have multiple Foos (where AnEmbeddableObject is a unique key for Bar). Here is an example:

        @Entity
        @Table(name = "foo")
        public class Foo {
            @Id
            @Column(name = "id")
            @GeneratedValue(generator = "seqGen")
            @SequenceGenerator(name = "seqGen", sequenceName = "FOO_ID_SEQ", allocationSize = 1)
            private Long id;

            @Embedded
            private AnEmbeddableObject anEmbeddableObject;

            @ManyToOne(targetEntity = Bar.class, fetch = FetchType.LAZY)
            @JoinColumns( {
                @JoinColumn(name = "column_1", referencedColumnName = "column_1"),
                @JoinColumn(name = "column_2", referencedColumnName = "column_2"),
                @JoinColumn(name = "column_3", referencedColumnName = "column_3"),
                @JoinColumn(name = "column_4", referencedColumnName = "column_4") })
            private Bar bar;

            // ... rest of class
        }

    And the Bar class:

        @Entity
        @Table(name = "bar")
        public class Bar {
            @Id
            @Column(name = "id")
            @GeneratedValue(generator = "seqGen")
            @SequenceGenerator(name = "seqGen", sequenceName = "BAR_ID_SEQ", allocationSize = 1)
            private Long id;

            @Embedded
            private AnEmbeddableObject anEmbeddableObject;

            // ... rest of class
        }

    Finally the AnEmbeddedObject class:

        @Embeddable
        public class AnEmbeddedObject {
            @Column(name = "column_1")
            private Long column1;

            @Column(name = "column_2")
            private Long column2;

            @Column(name = "column_3")
            private Long column3;

            @Column(name = "column_4")
            private Long column4;

            // ... rest of class
        }

    Obviously the schema is poorly normalised; it is a restriction that AnEmbeddedObject's fields are repeated in each table. The problem I have is that I receive this error when I try to start up Hibernate:

        org.hibernate.AnnotationException: referencedColumnNames(column_1, column_2, column_3, column_4) of Foo.bar referencing Bar not mapped to a single property

    I have tried marking the JoinColumns as not insertable and not updatable, but with no luck. Is there a way to express this with Hibernate/JPA annotations? Thank you.


  • Shaping EF LINQ Query Results Using Multi-Table Includes

    - by sisdog
    I have a simple LINQ EF query below using the method syntax. I'm using my Include statement to join four tables: Event and Doc are the two main tables, EventDoc is a many-to-many link table, and DocUsage is a lookup table. My challenge is that I'd like to shape my results by selecting only specific columns from each of the four tables. But the compiler is giving me the following error:

        'System.Data.Objects.DataClasses.EntityCollection' does not contain a definition for 'Doc' and no extension method 'Doc' accepting a first argument of type 'System.Data.Objects.DataClasses.EntityCollection' could be found.

    I'm sure this is something easy, but I'm not figuring it out. I haven't been able to find an example of someone using a multi-table Include while also shaping the projection. Thx, Mark

        var qry = context.Event
            .Include("EventDoc.Doc.DocUsage")
            .Select(n => new
            {
                n.EventDate,
                n.EventDoc.Doc.Filename,  //<=COMPILER ERROR HERE
                n.EventDoc.Doc.DocUsage.Usage
            })
            .ToList();

        EventDoc ed;
        Doc d = ed.Doc;            //<=NO COMPILER ERROR SO I KNOW MY MODEL'S CORRECT
        DocUsage du = d.DocUsage;


  • Output columns not in destination table?

    - by lance
    SUMMARY: I need to use an OUTPUT clause on an INSERT statement to return columns which don't exist on the table into which I'm inserting. If I can avoid it, I don't want to add columns to the table to which I'm inserting.

    DETAILS: My FinishedDocument table has only one column. This is the table into which I'm inserting.

        FinishedDocument
        -- DocumentID

    My Document table has two columns. This is the table from which I need to return data.

        Document
        -- DocumentID
        -- Description

    The following inserts one row into FinishedDocument. Its OUTPUT clause returns the DocumentID which was inserted. This works, but it doesn't give me the Description of the inserted document.

        INSERT INTO FinishedDocument
        OUTPUT INSERTED.DocumentID
        SELECT DocumentID
        FROM Document
        WHERE DocumentID = @DocumentID

    I need to return from the Document table both the DocumentID and the Description of the matching document from the INSERT. What syntax do I need to pull this off? I'm thinking it's possible only with the one INSERT statement, by tweaking the OUTPUT clause (in a way I clearly don't understand)? Is there a smarter way that doesn't resemble the path I'm going down here?

    EDIT: SQL Server 2005


  • checking if username exists on two tables PHP PDO?

    - by PHPLOVER
    Me again. I have a users table and a users_banlist table. On my registration form I want to check, all in one query, whether the username someone entered on the form exists in the users table and whether it also exists in the users_banlist table. I can do them on their own in individual queries but would rather do it all in one. Here is what I've got, but even though I enter a username that is taken, it does not tell me it's already taken.

        $stmt = $dbh->prepare("
            SELECT users.user_login, users_banlist.user_banlist
            FROM users, users_banlist
            WHERE users.user_login = ? OR users_banlist.user_banlist = ?");

        // check if username exists in users table or users_banlist table
        $stmt->execute(array($username, $username));

        if ($stmt->rowCount() > 0) {
            $error[] = 'Username already taken';
        }

    Basically I think it is something to do with the execute or rowCount(). Could anyone tell me where I am going wrong? Being new to PDO, I'm finding it a little confusing at the moment until I get my PDO book delivered to learn from. Thank you as always, phplover


  • Testing for Adjacent Cells In a Multi-level Grid

    - by Steve
    I'm designing an algorithm to test whether cells on a grid are adjacent or not. The catch is that the cells are not on a flat grid. They are on a multi-level grid such as the one drawn below.

        Level 1 (Top Level)
        | - - - - - |
        | A | B | C |
        | - - - - - |
        | D | E | F |
        | - - - - - |
        | G | H | I |
        | - - - - - |

        Level 2
        | -Block A- | -Block B- |
        | 1 | 2 | 3 | 1 | 2 | 3 |
        | - - - - - | - - - - - |
        | 4 | 5 | 6 | 4 | 5 | 6 | ...
        | - - - - - | - - - - - |
        | 7 | 8 | 9 | 7 | 8 | 9 |
        | - - - - - | - - - - - |
        | -Block D- | -Block E- |
        | 1 | 2 | 3 | 1 | 2 | 3 |
        | - - - - - | - - - - - |
        | 4 | 5 | 6 | 4 | 5 | 6 | ...
        | - - - - - | - - - - - |
        | 7 | 8 | 9 | 7 | 8 | 9 |
        | - - - - - | - - - - - |
        .     .
        .     .
        .     .

    This diagram is simplified from my actual need but the concept is the same. There is a top-level block with many cells within it (level 1). Each block is further subdivided into many more cells (level 2). Those cells are further subdivided into levels 3, 4 and 5 for my project, but let's just stick to two levels for this question. I'm receiving inputs for my function in the form of "A8, A9, B7, D3". That's a list of cell IDs where each cell ID has the format (level 1 id)(level 2 id). Let's start by comparing just two cells, A8 and A9. That's easy because they are in the same block.

        // v1 and v2 are the numeric level-2 ids of two cells in the same block
        private static RelativePosition getRelativePositionInTheSameBlock(int v1, int v2) {
            RelativePosition relativePosition;
            if (v1 - v2 == -1) {
                relativePosition = RelativePosition.LEFT_OF;
            } else if (v1 - v2 == 1) {
                relativePosition = RelativePosition.RIGHT_OF;
            } else if (v1 - v2 == -BLOCK_WIDTH) {
                relativePosition = RelativePosition.TOP_OF;
            } else if (v1 - v2 == BLOCK_WIDTH) {
                relativePosition = RelativePosition.BOTTOM_OF;
            } else {
                relativePosition = RelativePosition.NOT_ADJACENT;
            }
            return relativePosition;
        }

    An A9-B7 comparison could be done by checking if A's cell is a multiple of BLOCK_WIDTH and whether B's cell is (A - BLOCK_WIDTH + 1). Either that, or just check naively whether the A/B pair is 3-1, 6-4 or 9-7, for better readability. For B7-D3: they are not adjacent, but D3 is adjacent to A9, so I can do a similar adjacency test as above. So, getting away from the little details and focusing on the big picture: is this really the best way to do it? Keeping in mind the following points:

        - I actually have 5 levels, not 2, so I could potentially get a list like "A8A1A, A8A1B, B1A2A, B1A2B".
        - Adding a new cell to compare still requires me to compare all the other cells before it (it seems the best I could do for this step is O(n)).
        - The cells aren't all 3x3 blocks; they're just that way for my example. They could be MxN blocks with different M and N for different levels.
        - In my current implementation above, I have separate functions to check adjacency if the cells are in the same block, in separate horizontally adjacent blocks, or in separate vertically adjacent blocks. That means I have to know the position of the two blocks at the current level before I call one of those functions for the layer below.

    Judging by the complexity of having to deal with multiple functions for different edge cases at different levels, and having 5 levels of nested if statements, I'm wondering if another design is more suitable: perhaps a more recursive solution, use of other data structures, or perhaps mapping the entire multi-level grid to a single-level grid (my quick calculations give me about 700,000+ atomic cell ids). Even if I go that route, mapping from multi-level to single level is a non-trivial task in itself. (A sketch of that mapping idea follows this entry.)

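    Picking up the flat-mapping idea from the last paragraph, a hedged Python sketch for the two-level 3x3 example: once every cell id is mapped to global (row, col) coordinates, adjacency collapses to a Manhattan-distance-1 test. Block sizes and the id format are assumptions taken from the question's example:

        # Hedged sketch: flatten two-level ids like "A8" to global (row, col),
        # then adjacency is a plain orthogonal-neighbour test.
        # BLOCK_W/BLOCK_H/TOP_W and the id format are assumptions from the example.
        BLOCK_W, BLOCK_H = 3, 3   # level-2 cells per block
        TOP_W = 3                 # level-1 blocks per row (A-C, D-F, G-I)

        def flatten(cell_id):
            """Map e.g. 'A8' to a global (row, col) on the single-level grid."""
            block = ord(cell_id[0]) - ord("A")   # level-1 index, 0-based
            cell = int(cell_id[1:]) - 1          # level-2 index, 0-based
            block_row, block_col = divmod(block, TOP_W)
            cell_row, cell_col = divmod(cell, BLOCK_W)
            return (block_row * BLOCK_H + cell_row, block_col * BLOCK_W + cell_col)

        def adjacent(a, b):
            (r1, c1), (r2, c2) = flatten(a), flatten(b)
            return abs(r1 - r2) + abs(c1 - c2) == 1

        # A9/B7 touch across a block boundary; A9/D3 touch vertically; B7/D3 do not.
        assert adjacent("A9", "B7") and adjacent("A9", "D3") and not adjacent("B7", "D3")

    Deeper levels compose the same way (each level multiplies the coordinates by its block size and adds its offset), so the five-level MxN case needs one divmod per level rather than one function per edge case.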

  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year columns (e.g., all records from March 2010), and so we frequently need to get the available month and year combinations (i.e., do we have records for April 2010?). A coworker thinks that we should have a separate table from our main one that contains only the months and years we have data for. We only add records to our main table once a month, so it would just be a small update at the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries in the first table. This solution feels kludgy to me and a violation of DRY. What do you think is the correct way of solving this problem? Is there a better way than having two tables? (A small sketch of the single-table alternative follows this entry.)

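    For what it's worth, a hedged sketch of the single-table alternative: with a composite index on (year, month), the DISTINCT query the poster needs is usually cheap, which weakens the case for the second table. Table and column names below are assumptions modeled on the question, and SQLite stands in for whatever engine is actually in use:

        # Hedged sketch: an index on (year, month) makes "which month/year
        # combinations exist?" a cheap query, with no second table to maintain.
        # Table/column names are assumptions based on the question.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE records (year INT, month INT, payload TEXT)")
        conn.execute("CREATE INDEX ix_records_ym ON records (year, month)")
        conn.executemany("INSERT INTO records VALUES (?, ?, 'x')",
                         [(2010, 3), (2010, 3), (2010, 4)])

        # "Which month/year combinations exist?"
        print(conn.execute("SELECT DISTINCT year, month FROM records").fetchall())
        # "Do we have records for April 2010?"
        print(conn.execute(
            "SELECT EXISTS(SELECT 1 FROM records WHERE year = 2010 AND month = 4)"
        ).fetchone())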

  • Apache2 mod_python: IOError: Write failed, client closed connection.

    - by llazzaro
    This is the error:

        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] mod_python (pid=9528, interpreter='realpage.com', phase='PythonHandler', handler='django.core.handlers.modpython'): Application error
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] ServerName: 'realpage.dom'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] DocumentRoot: '/htdocs'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] URI: '/'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] Location: '/'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XX.248.60] Directory: None
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] Filename: '/htdocs'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] PathInfo: '/'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] Traceback (most recent call last):
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch\n default=default_handler, arg=req, silent=hlist.silent)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target\n result = _execute_target(config, req, object, arg)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target\n result = object(arg)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/django/core/handlers/modpython.py", line 228, in handler\n return ModPythonHandler()(req)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/django/core/handlers/modpython.py", line 220, in call\n req.write(chunk)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XX.248.60] IOError: Write failed, client closed connection.

    Please! I am sure you need more information in order to find the bug; please tell me what and how to get it. The error is thrown every time!


  • nginx not serving admin static files?

    - by toto_tico
    First, I want to clarify that this error is just for the admin static files. This means my problem is specific to the static files that correspond to the Django admin. The rest of the static files work perfectly. Basically, my problem is that for some reason I cannot access those admin static files through the nginx server. It works perfectly with Django's development server, and collectstatic is doing its job. This means it is putting the files in the expected place in the static folder. The URLs are correct, but I cannot even access the admin static files directly, though the others I can. So, for example, I am able to access this URL (copying it into the browser):

        myserver.com:8080/static/css/base/base.css

    but I am not able to access this other URL (copying it into the browser):

        myserver.com:8080/static/admin/css/admin.css

    I also tried to copy the admin/ directory structure into other_admin_directory_name/. Then I can access:

        myserver.com:8080/static/other_admin_directory_name/css/admin.css

    Then it works. So I checked permissions and everything is fine. I tried to change ADMIN_MEDIA_PREFIX = '/static/admin/' to ADMIN_MEDIA_PREFIX = '/static/other_admin_directory_name/'; it doesn't work. This is a mystery in itself that I am still exploring, with no luck. Finally, and it seems to be an important clue: I tried to copy the admin/ directory structure into admin_and_then_any_suffix/. Then I cannot access:

        myserver.com:8080/static/admin_and_then_any_suffix/css/admin.css

    So if the name of the directory starts with admin (for example administration or admin2), it doesn't work.

    (Added thanks to sarnold's observation.) The problem seems to be in the nginx configuration file /etc/nginx/sites-available/mysite:

        location /static/admin {
            alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
        }

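    For reference, the Django-side settings that pair with a single nginx /static/ alias usually come down to a couple of lines plus collectstatic. A hedged sketch (all paths are illustrative assumptions; ADMIN_MEDIA_PREFIX applies only to the older Django versions that still use it, which the question's config implies):

        # Hedged sketch of settings.py entries pairing with an nginx /static/ alias.
        # Paths are illustrative assumptions, not the poster's actual layout.
        STATIC_URL = "/static/"
        STATIC_ROOT = "/home/vl3/mysite/static/"   # collectstatic copies files here
        ADMIN_MEDIA_PREFIX = "/static/admin/"      # pre-Django-1.4 admin media setting

        # After `python manage.py collectstatic`, one nginx block can serve
        # everything under /static/, admin files included:
        #
        #   location /static/ {
        #       alias /home/vl3/mysite/static/;
        #   }

    With that layout, the separate `location /static/admin` block aliasing into site-packages becomes unnecessary; note also that `location /static/admin` is a prefix match in nginx, so it likely captures any URI starting with /static/admin, which would explain why directories named admin2 or administration misbehave.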

  • Celery daemon as an Ubuntu service does not consume tasks, while running from the terminal does

    - by Guy
    On Ubuntu 11.10, I have to issue Python tasks from Django using celery. I'm currently testing on the same machine, but eventually the celery worker should run on a remote machine. Django uses the following settings:

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672
        BROKER_VHOST = "/my_vhost"
        BROKER_USER = "celery"
        BROKER_PASSWORD = "celery"

    I can also see my task queued in http://localhost:55672/#/queues. The celery daemon uses the following configuration (celeryconfig.py):

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672
        BROKER_USER = "celery"
        BROKER_PASSWORD = "celery"
        BROKER_VHOST = "/my_vhost"
        CELERY_RESULT_BACKEND = "amqp"

        import os
        import sys
        sys.path.append(os.getcwd())

        CELERY_IMPORTS = ("tasks", )

    Running celeryd -l info works well, and now I want to run it as a service. I've followed the instructions from http://ask.github.com/celery/cookbook/daemonizing.html and now I'm trying to run it using:

        sudo /etc/init.d/celeryd start

    But the message is not being consumed; there is no error in the celery log either.

    /etc/default/celeryd:

        CELERYD_NODES="w1"
        CELERYD_CHDIR="/path/to/django/project"
        CELERYD_OPTS="--time-limit=300 --concurrency=1"
        CELERY_CONFIG_MODULE="celeryconfig"

        # %n will be replaced with the nodename.
        CELERYD_LOG_FILE="/var/log/celery/%n.log"
        CELERYD_PID_FILE="/var/run/celery/%n.pid"

        # Workers should run as an unprivileged user.
        CELERYD_USER="celery"
        CELERYD_GROUP="celery"

    I've also created a celery user in Ubuntu; not sure if it's necessary. Any help will be appreciated. Thanks, Guy

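    For orientation, the tasks module that CELERY_IMPORTS = ("tasks",) points at can be as small as this. A hedged sketch using the celery 2.x-era decorator API that the question's celeryd setup implies (the task body itself is an illustrative assumption):

        # tasks.py -- hedged sketch of the module named in CELERY_IMPORTS above.
        # Uses the celery 2.x-era API implied by the question's celeryd setup;
        # the task itself is an illustrative assumption.
        from celery.task import task

        @task
        def add(x, y):
            # Executed on the worker; with the daemonized setup, results and
            # log lines should land in /var/log/celery/w1.log.
            return x + y

    From the Django side (same BROKER_* settings), `add.delay(2, 3)` queues a message; whether the daemonized worker then picks it up depends on it loading the same celeryconfig and vhost as the terminal run, which is the usual place setups like this diverge.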

  • MySQL Not Turning On

    - by Shalin Shah
    I have an Amazon EC2 instance running the Amazon Linux AMI, and it's a micro instance. I wanted to install Django onto my server, so I entered these commands:

        wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/go
        wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/django.conf
        chmod 744 go
        ./go

    So after I was done, I ran sudo service httpd restart and sudo service mysqld restart. This is what came up for mysqld:

        Stopping mysqld:                 [  OK  ]
        MySQL Daemon failed to start.
        Starting mysqld:                 [FAILED]

    So I deleted the Django files /usr/local/python2.6.8/site-packages/django_registration.egg and tried to find the error. I found out that in my /etc/my.cnf the socket is set to socket=/var/lock/subsys/mysql.sock, so I went to /var/lock/subsys/ and there was no mysql.sock. I tried creating one using vim, but it still didn't work. Then I checked the error log and it said:

        Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    So I am pretty much lost right now. I know it has something to do with mysql.sock. If you might know a reason why this was caused, could you please let me know? I have a WordPress site on my server, so I kind of need MySQL to work. Thanks!


  • How can a driver change the kernel page table?

    - by Naruto
    I am encountering an issue with kernel memory. When my driver finishes running, other processes in the kernel fail to run; for example, when I run ls, the command crashes the kernel with the error "Corrupted page table" at a specified address. I do not know whether the page table of my driver is related to the page tables of other processes. How can my driver change the page tables of the other processes? And how does the driver of a process relate to the kernel page table? As I understand it, when the driver runs, execution is switched to kernel context. The kernel has its own page table and the driver has its own one. What is the relation among the kernel page table, the page table of my driver, and the page tables of the other processes when the driver runs in kernel context?


  • WPF FlowDocument: force calculation of height etc. "off screen"

    - by Lars
    My target: a DocumentPaginator which takes a FlowDocument with a table, splits the table to fit the page size, and repeats the header/footer (specially tagged TableRowGroups) on every page. For splitting the table I have to know the heights of its rows. While building the FlowDocument table in code, the height/width of the TableRows are 0 (of course). If I assign this document to a FlowDocumentScrollViewer (PageSize is set), the heights etc. are calculated. Is this possible without using a UI-bound object? Instantiating a FlowDocumentScrollViewer which is not bound to a window doesn't force the pagination/calculation of the heights. This is how I determine the height of a TableRow (which works perfectly for documents shown by a FlowDocumentScrollViewer):

        FlowDocument doc = BuildNewDocument();

        // what is the scrollviewer doing with the FlowDocument?
        FlowDocumentScrollViewer dv = new FlowDocumentScrollViewer();
        dv.Document = doc;
        dv.Arrange(new Rect(0, 0, 0, 0));

        TableRowGroup dataRows = null;
        foreach (Block b in doc.Blocks)
        {
            if (b is Table)
            {
                Table t = b as Table;
                foreach (TableRowGroup g in t.RowGroups)
                {
                    if ((g.Tag is String) && ((String)g.Tag == "dataRows"))
                    {
                        dataRows = g;
                        break;
                    }
                }
            }
            if (dataRows != null)
                break;
        }

        if (dataRows != null)
        {
            foreach (TableRow r in dataRows.Rows)
            {
                double maxCellHeight = 0.0;
                foreach (TableCell c in r.Cells)
                {
                    Rect start = c.ElementStart.GetCharacterRect(LogicalDirection.Forward);
                    Rect end = c.ElementEnd.GetNextInsertionPosition(LogicalDirection.Backward).GetCharacterRect(LogicalDirection.Forward);
                    double cellHeight = end.Bottom - start.Top;
                    if (cellHeight > maxCellHeight)
                        maxCellHeight = cellHeight;
                }
                System.Diagnostics.Trace.WriteLine("row " + dataRows.Rows.IndexOf(r) + " = " + maxCellHeight);
            }
        }

    Edit: I added the FlowDocumentScrollViewer to my example. The call to "Arrange" forces the FlowDocument to calculate its heights etc. I would like to know what the FlowDocumentScrollViewer is doing with the FlowDocument, so I can do it without the UIElement. Is that possible?


  • Comments Parent-Child query with indentation

    - by poldoj
    I've been trying to retrieve comments to articles in a pretty common blog fashion. Here's my sample code:

        -- ----------------------------
        -- Sample Table structure for [dbo].[Comments]
        -- ----------------------------
        CREATE TABLE [dbo].[Comments] (
            [CommentID] int NOT NULL,
            [AddedDate] datetime NOT NULL,
            [AddedBy] nvarchar(256) NOT NULL,
            [ArticleID] int NOT NULL,
            [Body] nvarchar(4000) NOT NULL,
            [parentCommentID] int NULL
        )
        GO

        -- ----------------------------
        -- Sample Records of Comments
        -- ----------------------------
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'1', N'2011-11-26 23:18:07.000', N'user', N'1', N'body', null);
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'2', N'2011-11-26 23:18:50.000', N'user', N'2', N'body', null);
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'3', N'2011-11-26 23:19:09.000', N'user', N'1', N'body', null);
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'4', N'2011-11-26 23:19:46.000', N'user', N'3', N'body', null);
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'5', N'2011-11-26 23:20:16.000', N'user', N'1', N'body', N'1');
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'6', N'2011-11-26 23:20:42.000', N'user', N'1', N'body', N'1');
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'7', N'2011-11-26 23:21:25.000', N'user', N'1', N'body', N'6');
        GO

        -- ----------------------------
        -- Primary Key structure for table [dbo].[Comments]
        -- ----------------------------
        ALTER TABLE [dbo].[Comments] ADD PRIMARY KEY ([CommentID])
        GO

        -- ----------------------------
        -- Foreign Key structure for table [dbo].[Comments]
        -- ----------------------------
        ALTER TABLE [dbo].[Comments] ADD FOREIGN KEY ([parentCommentID]) REFERENCES [dbo].[Comments] ([CommentID]) ON DELETE NO ACTION ON UPDATE NO ACTION
        GO

    I thought I could use a CTE query to do the job, like this:

        WITH CommentsCTE(CommentID, AddedDate, AddedBy, ArticleID, Body, parentCommentID, lvl, sortcol)
        AS
        (
            SELECT CommentID, AddedDate, AddedBy, ArticleID, Body, parentCommentID, 0, cast(CommentID as varbinary(max))
            FROM Comments
            UNION ALL
            SELECT P.CommentID, P.AddedDate, P.AddedBy, P.ArticleID, P.Body, P.parentCommentID, PP.lvl+1,
                   CAST(sortcol + CAST(P.CommentID AS BINARY(4)) AS VARBINARY(max))
            FROM Comments AS P
            JOIN CommentsCTE AS PP ON P.parentCommentID = PP.CommentID
        )
        SELECT REPLICATE('--', lvl) + right('>', lvl) + AddedBy + ' :: ' + Body, CommentID, parentCommentID, lvl
        FROM CommentsCTE
        WHERE ArticleID = 1
        ORDER BY sortcol
        GO

    but the results have been very disappointing so far, and after days of tweaking I decided to ask for help. I am looking for a way to display hierarchical comments on articles the way blogs do. The problem with this query is that I get duplicates, because I couldn't figure out how to properly select only the ArticleID whose comments I want to display. I'm also looking for a method that sorts child entries by date within the same level. An example of what I'm trying to accomplish could be something like:

        (ArticleID[post retrieved])
        -------------------------
        -------------------------
        (Comments[related to the article id above])
        first comment[no parent]
        --[first child to first comment]
        --[second child to first comment]
        ----[first child to second child comment to first comment]
        --[third child to first comment]
        ----[first child to third child comment to first comment]
        ------[(recursive child): first child to first child to third child comment to first comment]
        ------[(recursive child): second child to first child to third child comment to first comment]
        second comment[no parent]
        third comment[no parent]
        --[first child to third comment]

    I kinda got myself lost in all this mess... I appreciate any help or simpler ways to get this working. Thanks. (A sketch of an application-side alternative follows this entry.)

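    One hedged alternative to fighting the CTE: fetch the flat rows for one article and do the nesting and indentation in application code. A Python sketch over rows shaped like the sample data (IDs and dates mirror the inserts for ArticleID = 1; children are sorted by date within each level, as requested):

        # Hedged sketch: build and print the comment tree in application code
        # from flat (CommentID, AddedDate, parentCommentID) rows for one article.
        # The data mirrors the question's sample inserts for ArticleID = 1.
        from collections import defaultdict

        rows = [
            (1, "2011-11-26 23:18:07", None),
            (3, "2011-11-26 23:19:09", None),
            (5, "2011-11-26 23:20:16", 1),
            (6, "2011-11-26 23:20:42", 1),
            (7, "2011-11-26 23:21:25", 6),
        ]

        children = defaultdict(list)
        for cid, added, parent in rows:
            children[parent].append((added, cid))

        def render(parent=None, lvl=0):
            for added, cid in sorted(children[parent]):  # date order within a level
                print("--" * lvl + f"comment {cid} ({added})")
                render(cid, lvl + 1)

        render()

    The SQL then only needs a plain `SELECT ... WHERE ArticleID = @id`, which also sidesteps the duplicate rows coming from the CTE's unfiltered anchor member.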

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The same Node.js code for consuming WAS can be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS Node application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool, mentioned in my last post, to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function that copies the records from WASD to the table service:

        1. Delete the table named "resource".
        2. Create a new table named "resource". These two steps ensure that we have an empty table.
        3. Load all records from the "resource" table in WASD.
        4. For each record loaded from WASD, insert it into the table one by one.
        5. Prompt the user when finished.

    In order to use the table service we need the storage account and key, which can be found in the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS, we need to import the azure module. I also created another variable storing the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all records, we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously in the POSIX async thread pool, and once it's done the callback function is performed. This is the reason we need to nest the table creation code inside the deletion callback. If we put the table creation code after the deletion code, they would be invoked in parallel. Next, for each record in WASD I create an entity and insert it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
    The correct logic should be: when all entities have been copied to the table service with no error, then I send the response to the browser; otherwise I send an error message to the browser. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to be iterated. The second argument is the operation to perform on each item in the array. And the third argument will be invoked when all items have been processed or any error occurs; here we can send our response to the browser.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally, and now we can see the response is sent after all entities have been inserted. Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria; for example, see the code here. In the code below I query an entity by partition key and row key, and return the proper localization value in the response.

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    I then tested it on the local emulator. Finally, if we want to publish this application to the cloud, we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS, you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files, and the runtime will be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local use. We can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment", which provides the functionality to retrieve the configuration settings and endpoints, etc. In the code below I define the connection string variables and then use the SDK to retrieve them and initialize the table client.

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK.

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we test the application right now, we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is defined as the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role and Node.js is launched inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. That way the Node.js application will have the connection to the service runtime and will be able to read the configurations. Starting Node.js in the local emulator, we can see it retrieved the local connections and storage account.
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an EntryPoint element in the CSDEF file and set "node.exe" as the entry point.

    I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it's simple, easy to learn and deploy, and utilizes a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it's all the more useful on a cloud computing platform.

    Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: SQL parameters are not yet supported in "node-sqlserver", and creating the storage client from a storage connection string is not yet supported in "azure". But Microsoft is working on making them easier to use, and on adding more features and functionality.

    PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Yii: Multi-language website - best practices.

    - by michal
    Hi, I find Yii a great framework, and the example website created with yiic shell is a good point to start... however, it doesn't cover the topic of multi-language websites, unfortunately. The docs cover translating short messages, but not keeping multi-lingual content... I'm about to start working on a website which needs to be in at least two languages, and I'm wondering what the best way is to keep content for that... The problem is that the content is mixed extensively with common elements (like embedded video files). I need to avoid duplicating those commons... So far I used to have an array of arrays containing texts (usually no more than 1-2 short paragraphs); the view file would then just render the text from an array. Now I'd like to avoid keeping it in arrays (which requires some attention when placing double quotation marks " " and is inconvenient in general...). So, what is the best way to keep those short paragraphs? Should I keep them in a DB like (id | msg_id | language | content) and then select them by msg_id & language? That still requires me to create some msg_ids and embed them into the view file... Is there any recommended paradigm for which Yii has some solutions? Thanks, m.


  • C# XML-RPC multi-threaded newPost: something is not right

    - by user1047463
    I recently decided to get my feet wet with C# multithreading, and my progress was good until now. My program uses XML-RPC to create new posts on WordPress. When I run my app on localhost everything works fine and I am able to create new posts on WordPress using a multi-threaded approach. However, when I try to do the same with WordPress installed on a remote server, new posts won't appear, so when I go to see all posts, the total number of posts remains unchanged. Another thing to note: the function that creates a new post does return the ID of a supposedly created post. But as I mentioned previously, the new posts are not visible when I try to access them in the admin menu. Since I'm new to multithreading, I was hoping some of you guys could help me and check if I'm doing anything wrong. Here's the code that creates the threads and calls the create-post function:

        foreach (DataRow row in dt.Rows)
        {
            Thread t = null;
            t = new Thread(createPost);
            lock (t)
            {
                t.Start();
            }
        }

        public void createPost()
        {
            string postid;
            try
            {
                icp = (IcreatePost)XmlRpcProxyGen.Create(typeof(IcreatePost));
                clientProtocol = (XmlRpcClientProtocol)icp;
                clientProtocol.Url = xmlRpcUrl;
                postid = icp.NewPost(1, blogLogin, blogPass, newBlogPost, 1);
                currentThread--;
                threadCountLabel.Invoke(new MethodInvoker(delegate
                {
                    threadCountLabel.Text = currentThread.ToString();
                }));
            }
            catch (Exception ex)
            {
                MessageBox.Show("createPost ERROR -> " + ex.Message);
            }
        }

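    To rule out a server-side cause, the same flow is quick to prototype against the stock WordPress XML-RPC endpoint from Python; a hedged sketch (URL, credentials and post fields are placeholder assumptions):

        # Hedged sketch: create WordPress posts over XML-RPC from a thread pool.
        # Endpoint, credentials and post fields are placeholder assumptions.
        import xmlrpc.client
        from concurrent.futures import ThreadPoolExecutor

        XMLRPC_URL = "https://example.com/xmlrpc.php"

        def create_post(title, body):
            # One proxy per call/thread: ServerProxy instances aren't thread-safe.
            wp = xmlrpc.client.ServerProxy(XMLRPC_URL)
            post = {"title": title, "description": body}
            # metaWeblog.newPost(blogid, username, password, struct, publish)
            return wp.metaWeblog.newPost("1", "login", "password", post, True)

        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(create_post, f"Post {i}", "body") for i in range(5)]
            for f in futures:
                print("created post id:", f.result())  # re-raises per-thread errors

    If the IDs print but the posts still don't show up in the admin list, that points away from the threading and toward the remote server setup (e.g., posts landing under a different blog id or status), which matches the symptom described above.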

  • In .NET, What Is Fastest Way to Initialize Multi-Dimensional Array to Non-Default Value

    - by AMissico
    How do I initialize a multi-dimensional array of a primitive type as fast as possible? I am stuck with using multi-dimensional arrays. My problem is performance. The following routine initializes a 100x100 array in approx. 500 ticks. Removing the int.MaxValue initialization results in approx. 180 ticks just for the looping, and approximately 100 ticks to create the array without looping and without initializing to int.MaxValue. Routines similar to this are called a few tens of thousands to several million times. I am open to suggestions on how to optimize this non-default initialization of an array. One idea I had is to use a smaller primitive type when available. For instance, using byte instead of int saves 100 ticks. I would be happy with this, but I am hoping that I don't have to change the primitive data type.

        public int[,] CreateArray(Size size)
        {
            int[,] array = new int[size.Width, size.Height];
            for (int x = 0; x < size.Width; x++)
            {
                for (int y = 0; y < size.Height; y++)
                {
                    array[x, y] = int.MaxValue;
                }
            }
            return array;
        }

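    For a sense of what a runtime-provided bulk fill looks like, a hedged analogy in Python/NumPy rather than a .NET answer (the .NET question stands; this just shows the analogous one-call initialization):

        # Hedged analogy: NumPy's bulk fill replaces the question's nested loop
        # with one vectorized call; the width/height values are illustrative.
        import numpy as np

        INT_MAX = np.iinfo(np.int32).max

        def create_array(width, height):
            # Counterpart of the question's CreateArray, minus the per-cell loop.
            return np.full((width, height), INT_MAX, dtype=np.int32)

        grid = create_array(100, 100)
        assert grid.shape == (100, 100) and grid[0, 0] == INT_MAX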
