Search Results

Search found 20859 results on 835 pages for 'little bobby tables'.

Page 110 of 835

  • Call macro from Python script?

    - by Eric
    One of our page templates is made up of a bunch of macros, and these macros produce a set of HTML tables. I want to use a couple of these tables in a Python script to create a PDF. Is there a way to call a macro from a Python script and get back the HTML that it produces? If so, can you explain? Thanks Eric
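
    The post doesn't say which template engine defines these macros; if it happens to be Jinja2 (an assumption), a macro exported by a template can be called directly from Python and returns the rendered HTML. A minimal sketch, with the template path and macro name made up:

        from jinja2 import Environment, FileSystemLoader

        # Load the template that defines the macros (path and file name are hypothetical).
        env = Environment(loader=FileSystemLoader("templates"))
        template = env.get_template("tables.html")

        # template.module exposes the template's macros as plain callables;
        # calling one returns the rendered HTML as a string.
        html = template.module.price_table(rows=[("Widget", "9.99")])
        print(html)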

    Read the article

  • What is the best way to archive data in a relational database?

    - by GenericTypeTea
    I have a bit of an issue with a particular aspect of a program I'm working on. I need the ability to archive (fix) a table so that a change anywhere in the system will not affect the results it returns. This is the basic structure of what I need to fix:

        Recipe --> Recipe (as sub recipe)
        Recipe --> Ingredients

    So, if I fix a recipe, I need to ensure all the sub recipes (including all the sub recipes' sub recipes, and so forth) are fixed, and all its ingredients are fixed. The problem is that the sub recipes and ingredients still need to be modifiable, as they are used by other recipes that are not fixed. I came up with a solution whereby I serialize (with protobuf-net) a master object that covers the recipe and all of its sub recipes and ingredients, and save the archive data to a table like this:

        Archive {
            ReferenceId,     (i.e. RecipeId)
            ReferenceTypeId, (i.e. Recipe)
            ArchiveData      varbinary(max)
        }

    Now, this works great and is almost perfect... however, I totally forgot (I'd love to blame the agile development mentality, but this was just short-sighted) that this information needs to be reported on. I can't see how I could inflate the serialized data back into my Recipe object and use it in a report. I'm using the standard SQL Server 2005 Reporting Services at the moment. Alternatively, I guess I could do one of the following:

    1. Duplicate every table and tag the word "Archive" onto the end of the table name. This would give me an area of specific archive data... but, ignoring my simplified example, there would actually be about 15 tables duplicated.
    2. Add a nullable, non-foreign-key column called "CopiedFromId" to every table that contains fixed data, and duplicate every record that the recipe (and all its sub recipes and all their sub recipes) touches.
    3. Create some sort of denormalised structure that could be restored at a later date to the original, unfixed recipe. Although I think this would be like option 1 and involve a lot of extra tables.

    Anyway, I'm at a total loss and don't particularly like any of these ideas. Can anyone please advise the best course of action?

    EDIT: Or 4) create tables specific to what the report requires and populate them with the data when the user clicks the report button? This would add about 4 extra tables for the report in question.

    Read the article

  • How to remove lowercase sentence fragments from text?

    - by Aaron
    Hello: I'm trying to remove lowercase sentence fragments from standard text files using regular expressions or a simple Perl one-liner. These are commonly referred to as speech or attribution tags, for example "he said", "she said", etc. This example shows before and after using manual deletion.

    Original:

        "Ah, that's perfectly true!" exclaimed Alyosha. "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!" cried the girl by the window, suddenly turning to her father with a disdainful and contemptuous air. "Wait a little, Varvara!" cried her father, speaking peremptorily but looking at them quite approvingly. "That's her character," he said, addressing Alyosha again. "Where have you been?" he asked him. "I think," he said, "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him. "You sit down, too," said he.

    All lowercase sentence fragments manually removed:

        "Ah, that's perfectly true!" "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!" "Wait a little, Varvara!" "That's her character," "Where have you been?" "I think," "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him. "You sit down, too,"

    I've changed straight quotes (") to balanced ones and tried: ” (...)+[.] Of course, this removes some fragments but also deletes some text in balanced quotes and text starting with uppercase letters. [^A-Z] didn't work in the above expression. I realize that it may be impossible to achieve 100% accuracy, but any useful expression, Perl, or Python script would be deeply appreciated. Cheers, Aaron
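
    As a starting point (a sketch rather than a complete solution; it only catches tags that end in sentence punctuation, plus a second pass for tags that end in a comma before another quotation), working on straight quotes:

        import re

        text = open("story.txt", encoding="utf-8").read()  # file name is hypothetical

        # Remove a lowercase fragment that directly follows a closing quote,
        # up to and including the next sentence-ending punctuation mark.
        text = re.sub(r'(?<=") +[a-z][^".!?]*[.!?]', '', text)

        # Remove a lowercase tag such as ' he said,' that sits between a closing
        # quote and the next opening quote.
        text = re.sub(r'(?<=") +[a-z][^".!?]*,(?= *")', '', text)

        print(text)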

    Read the article

  • NHibernate complex one-to-one mapping

    - by Eugene Strizhok
    There is a table A containing common, unversioned data for entities. There are also tables B, C and D with versioned data for particular entity types. All of these tables reference table A. The task is to map a property of an entity type (for example, one stored in table B) that references table A, and to specify a rule for how the entity should be fetched from table B based on the identifier from table A (for example, to fetch the latest version of the entity). Is this possible with NHibernate?

    Read the article

  • Crazy Python behaviour

    - by Martin
    I have a little piece of Python code in the server script for my website which looks a little bit like this:

        console.append([str(x) for x in data])
        console.append(str(max(data)))

    Quite simple, you might think; however, the result it outputs is this:

        ['3', '12', '3']
        3

    For some reason Python thinks 3 is the max of [3, 12, 3]! So am I doing something wrong? Or is this misbehaviour on the part of Python?
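
    The values here are strings, so max() compares them lexicographically ('3' beats '12' because '3' > '1'); comparing by numeric value gives the expected answer. A quick illustration:

        data = ['3', '12', '3']

        print(max(data))            # '3'  -- strings compare character by character
        print(max(data, key=int))   # '12' -- compare by numeric value instead
        print(max(map(int, data)))  # 12   -- or convert to ints up front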

    Read the article

  • Query string length limit in VBA?

    - by datatoo
    I am trying to combine multiple audit tables and then filter the results into an Excel sheet. The UNION ALL and the parameters make the query more than 1200 characters long, and it appears the string is truncated when it runs. What recommendations can anyone make? I have no control over the database structure and am only reading FoxPro free tables. I am permitted table creation, but can write into the Excel sheet connecting to the data source. Thanks for suggestions.

    Read the article

  • Setting up web page width

    - by RPK
    I am new to web design. I want to set the page width so that the page displays well on an 800x600 screen. I normally use tables, but I have read that excessive use of tables slows down a website. What else can I use, and how do I set the width?

    Read the article

  • getAudioInputStream cannot convert [stereo, 4 bytes/frame] stream to [mono, 2 bytes/frame]

    - by brian_d
    Hello. I am using Java Sound and have an AudioInputStream with the format:

        PCM_SIGNED 8000.0 Hz, 16 bit, stereo, 4 bytes/frame, little-endian

    Using AudioSystem.getAudioInputStream(target_format, original_stream) produces an 'IllegalArgumentException: Unsupported Conversion' when the target_format is:

        PCM_SIGNED 8000.0 Hz, 16 bit, mono, 2 bytes/frame, little-endian

    Is it possible to convert this stream manually after every read() call? And if yes, how? In general, how can you compare two formats and tell whether a conversion is possible?
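
    The asker's code is Java, but the per-frame arithmetic for a manual conversion is the same in any language: each 4-byte stereo frame holds two little-endian 16-bit samples, and averaging them yields one mono sample. A small sketch of that math (in Python, for illustration only; the function name is made up):

        import struct

        def stereo16le_to_mono(frames: bytes) -> bytes:
            """Average the left/right samples of 16-bit little-endian stereo frames."""
            out = bytearray()
            for offset in range(0, len(frames), 4):
                left, right = struct.unpack_from("<hh", frames, offset)
                out += struct.pack("<h", (left + right) // 2)
            return bytes(out)

        # Example: one frame with left=1000, right=3000 becomes the single sample 2000.
        frame = struct.pack("<hh", 1000, 3000)
        print(struct.unpack("<h", stereo16le_to_mono(frame)))   # (2000,)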

    Read the article

  • JSP/Servlet CRUD form with special input field based on join table relationship

    - by user1701467
    There's a database with a bunch of tables that are joined with join tables, e.g. Members and Division are joined with a MembersDivisions table. JSP/Servlet login functionality has been developed, but now the CRUD functionality needs to be implemented, and I was unable to find any tutorials about it, especially ones involving the join table (when creating a new division you can add members to it, or add more to an existing one). Maybe someone could, theoretically, explain how to do the Division part of the CRUD web form?

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all,

    This question is related to the schema that can be found in one of my other questions here. Basically, in my database I store users, locations and sensors, amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change.

    There are also non-editable items in the database, such as "readings". They are more of a log, really. Readings are logged against sensors, because each reading is the reading for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes of a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table:

        valid_from
        valid_to
        edited_by

    If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency.

    I can possibly avoid triggers by using an extension to the PostgreSQL database. It provides a column type called "period" which allows you to store a period of time between two dates, and then allows you to use CHECK constraints to prevent overlapping periods. That might be an answer.

    I am wondering, though, if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current", i.e. only bother to check constraints on records where valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers).

    Does anyone have any thoughts about this?

    PS - the title also mentions an auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
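
    For reference, the "prevent overlapping periods" idea can also be expressed in modern PostgreSQL with a range column and an exclusion constraint instead of the extension mentioned above. A minimal sketch; the table, its columns and the connection string are made up for illustration:

        import psycopg2

        ddl = """
        CREATE EXTENSION IF NOT EXISTS btree_gist;

        CREATE TABLE sensor_history (
            sensor_id  integer   NOT NULL,
            name       text      NOT NULL,
            edited_by  text      NOT NULL,
            validity   tstzrange NOT NULL,
            -- no two rows for the same sensor may have overlapping validity periods
            EXCLUDE USING gist (sensor_id WITH =, validity WITH &&)
        );
        """

        # Hypothetical connection; the DDL above is the interesting part.
        with psycopg2.connect("dbname=mydb user=me") as conn:
            with conn.cursor() as cur:
                cur.execute(ddl)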

    Read the article

  • Load parent and child tables in one query with LINQ to Entities

    - by parminder
    Hi experts,

    I have the following table/class structure in LINQ to Entities:

        Books     { bookId, Title }
        Tags      { TagId, Tag }
        BooksTags { BookId, TagId }

    Now I need to write a query which gives me a result like this:

        Class Result { bookId, Title, Tags }

    Tags should be comma-separated text from the Tags table, produced by joining all three tables. How can I get this done?

    Thanks
    Parminder

    Read the article

  • How do you find the disk size of a Postgres table and its indexes

    - by mmrobins
    I'm coming to Postgres from Oracle and looking for a way to find the table and index size in terms of bytes/MB/GB/etc, or even better the size for all tables. In Oracle I had a nasty long query that looked at user_lobs and user_segments to give back an answer. I assume in Postgres there's something I can use in the information_schema tables, but I'm not seeing where. Thanks in advance.
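
    For what it's worth, the sizes are exposed through PostgreSQL's built-in functions (pg_relation_size, pg_total_relation_size, pg_size_pretty) rather than information_schema. A minimal sketch, with a made-up connection string:

        import psycopg2

        query = """
        SELECT c.relname,
               pg_size_pretty(pg_relation_size(c.oid))       AS table_size,
               pg_size_pretty(pg_total_relation_size(c.oid)) AS total_incl_indexes
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'r' AND n.nspname = 'public'
        ORDER BY pg_total_relation_size(c.oid) DESC;
        """

        with psycopg2.connect("dbname=mydb user=me") as conn:
            with conn.cursor() as cur:
                cur.execute(query)
                for name, table_size, total in cur.fetchall():
                    print(name, table_size, total)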

    Read the article

  • Why doesn't VB.NET 9 have Automatic Properties like C# 3??

    - by Chris Pietschmann
    Would having a nice little feature that makes it quicker to write code, such as Automatic Properties, fit very nicely with the mantra of VB.NET? Something like this would work perfectly:

        Public Property FirstName() As String
            Get
            Set
        End Property

    UPDATE: VB.NET 10 (coming with Visual Studio 2010 and .NET 4.0) will have Automatic Properties. Here's a link that shows a little info about the feature: http://geekswithblogs.net/DarrenFieldhouse/archive/2008/12/01/new-features-in-vb.net-10-.net-4.0.aspx

    In VB.NET 10, Automatic Properties will be defined like this:

        Public Property CustomerID As Integer

    Read the article

  • Automatic database schema generation and migration with Perl

    - by pistacchio
    In RoR, Django or web2py you can "describe" a database (as a set of classes that map to tables) and the framework (given a connection string to the desired database) generates the tables, fields and relations; in the case of RoR and web2py it also keeps them up to date (e.g. removing a class drops the table, adding a property to the class triggers an "alter table add", etc.). Is there any Perl module that does the same? E.g. one that takes a YAML/XML/JSON description of a database as input and modifies/generates the database schema accordingly?
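
    Not a Perl answer, but as an illustration of the pattern being described (classes in, schema out), here is roughly what it looks like with Python's SQLAlchemy; the model and connection string are made up, and ongoing schema migration (the "keeps it up-to-date" part) needs a separate tool such as Alembic:

        from sqlalchemy import Column, Integer, String, create_engine
        from sqlalchemy.orm import declarative_base

        Base = declarative_base()

        class User(Base):
            __tablename__ = "users"
            id = Column(Integer, primary_key=True)
            name = Column(String(100), nullable=False)

        # Emits CREATE TABLE statements for every class registered on Base.
        engine = create_engine("sqlite:///example.db")
        Base.metadata.create_all(engine)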

    Read the article

  • Fluent NHibernate question

    - by Kevin Pang
    Let's say you have two tables, "Users" and "UserRoles". Here's how the two tables are structured (table - columns):

        Users     - UserID (int)
        UserRoles - UserID (int), Role (string)

    What I want is for my "User" class in my domain to have an IList of roles. How do I construct my Fluent NHibernate mapping to achieve this?

    Read the article

  • How to convert EER to SQL Table?

    - by Khajavi
    I have no problem converting an ER model to SQL tables, but I don't know how to convert an EER model to SQL tables. As you know, EER adds the "is a" specification and inheritance, and I don't know how relational databases can handle an inheritance specification.
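
    One common way to map an "is a" hierarchy is a table per class, where each subtype table reuses the supertype's primary key as both its primary key and a foreign key. A small sketch; the Person/Student/Employee hierarchy is made up for illustration:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE person (
            person_id INTEGER PRIMARY KEY,
            name      TEXT NOT NULL
        );

        -- "is a" relationships: each subtype row shares the supertype's key.
        CREATE TABLE student (
            person_id INTEGER PRIMARY KEY REFERENCES person(person_id),
            major     TEXT
        );

        CREATE TABLE employee (
            person_id INTEGER PRIMARY KEY REFERENCES person(person_id),
            salary    REAL
        );
        """)

    Single-table mapping (one table with a type column and nullable subtype columns) and concrete-table mapping (one full table per subtype) are the usual alternatives.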

    Read the article

  • Oracle Schema Design: Separate Schema with I/O Overhead?

    - by Guru
    We are designing a database schema for a new system based on Oracle 11gR1. We have identified a main schema, MYSYS, which would have close to 100 tables; these will be accessed from the front-end Java application. We have a requirement to audit the values that get changed in close to 50 of these tables, and this has to be done for every row. This means that for a single row in MYSYS.T1 there might be 50 (or more) rows in the MYSYS_AUDIT.T1_AUD table. We would keep the old value of every column entry along with the new values from T1.

    Our DBA advised against this approach, saying that a separate schema means an extra I/O for every operation. Basically, the audit schema would be used only for some analysis and for entering values (thus SELECT and INSERT).

    Is it true that "a separate schema means an extra I/O"? I could not find a justification. It appears logical to me, as the audit data should not be tampered with, so a separate schema.

    We also designed a separate schema, MYSYS_ARC, for archiving some tables from MYSYS. From MYSYS_ARC the tables might be backed up to tape or deleted after sufficient time.

    A few stats:

    - A few tables (close to 20-30) in the MYSYS schema could grow to around 50M rows.
    - We have asked for a total disk space of 4 TB.
    - The MYSYS_AUDIT schema might hold 10 times as much data as MYSYS, but we won't keep it for more than 3 months.

    Questions:

    1. Given all this, can you suggest any improvements?
    2. Does a separate schema affect disk I/O (one extra I/O for every schema)?
    3. Any general suggestions?

    Figure:

        +-------------------+            +-------------------+
        | MYSYS             |            | MYSYS_AUDIT       |
        |                   |            |                   |
        | 1.   T1           |            | 1.  T1_AUD        |
        | 2.   T2           |            | 2.  T2_AUD        |
        | 3.   T3           |----------->| 3.  T3_AUD        |
        | 4.   T4           | (SELECT,   | 4.  T4_AUD        |
        | .                 |  INSERT)   | .                 |
        | .                 |            | .                 |
        | .                 |            | .                 |
        | 100. T100         |            | 50. T50_AUD       |
        +-------------------+            +-------------------+
                 |
                 | (INSERT)
                 |
                 v
        +-------------------+
        | MYSYS_ARC         |
        |                   |
        | 1.   T1_ARC       |
        | 2.   T2_ARC       |
        | 3.   T3_ARC       |
        | 4.   T4_ARC       |
        | .                 |
        | .                 |
        | .                 |
        | 100. T100_ARC     |
        +-------------------+

    Apart from this, we have two more schemas with read-only rights, but they are mainly for ad hoc purposes and we don't mind the performance on them.

    Read the article

  • What is the best method to call an arbitrary JSON server from .NET (Specifically Windows Phone 7)

    - by davidhayes
    Hi, I have a server that I have no control over. It's JSON-based, and I've put together a simple proof of concept that calls the server using HttpWebRequest etc., and it works fine (if a little wordy, since MS have removed all the synchronous I/O calls). Is there a better way of doing this? I've been looking at WCF as an option, but any stable and reasonably performant library should do the job. This is a new area for me, so I'm a little unsure what the best practice is (or where to find it out). Thanks in advance, Dave

    Read the article

  • MySQL Replication

    - by ychian
    My current database design mainly uses MyISAM as the storage engine. I wonder if it's possible to split the tables, keeping some as MyISAM and switching some to InnoDB, within the same database. The reason for switching some of the tables to InnoDB is that I need the row-level locking that InnoDB offers. I am not too sure whether this would have any effect on replication.

    Read the article

  • PHP MySQL join table

    - by Jordan Pagaduan
    $sql = "SELECT logs.full_name, logout.status FROM logs, logout WHERE logs.employee_id = logout.employee_id";

    Tables: logs, logout

    I'm getting an error with this. I searched Google for how to join tables, and that's what I came up with. What is wrong with this code?

    Read the article

  • F# for C#/Haskell programmer

    - by Maciej Piechotka
    What is the recommended tutorial on F# for a Haskell programmer? F# seems to borrow a lot from Haskell, but there are little traps which make it hard to write. Generally, I need a walkthrough of F# which would not explain the difference between mutable and immutable data (Haskell is much stricter in this area), etc. I know C# a little, so I know more or less what .NET is about as well.

    Read the article

  • Compare 2 databases

    - by shantanuo
    I have 2 identical databases, abc15 and abc18, but one of the databases has one extra table and I need to find it. I thought the following query should return it, but it is not showing the record that I expect.

        select * from information_schema.tables as a
        left join information_schema.tables as b
            on a.TABLE_SCHEMA = b.TABLE_SCHEMA AND a.TABLE_NAME = b.TABLE_NAME
        where a.TABLE_SCHEMA = 'abc15' AND b.TABLE_SCHEMA = 'abc18'
            and b.TABLE_NAME IS NULL
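
    A guess at the cause (not stated in the post): the join condition a.TABLE_SCHEMA = b.TABLE_SCHEMA can never pair abc15 with abc18, and filtering on b in the WHERE clause turns the LEFT JOIN into an inner join, so b.TABLE_NAME IS NULL never matches. One common rewrite moves the conditions on b into the ON clause; a minimal sketch with the MySQL Connector/Python driver (connection details made up):

        import mysql.connector

        query = """
        SELECT a.TABLE_NAME
        FROM information_schema.tables AS a
        LEFT JOIN information_schema.tables AS b
               ON b.TABLE_SCHEMA = 'abc18'
              AND b.TABLE_NAME = a.TABLE_NAME
        WHERE a.TABLE_SCHEMA = 'abc15'
          AND b.TABLE_NAME IS NULL
        """

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="secret", database="information_schema")
        cur = conn.cursor()
        cur.execute(query)
        for (table_name,) in cur.fetchall():
            print(table_name)   # present in abc15 but missing from abc18

        # Swap 'abc15' and 'abc18' and run again to catch a table that
        # exists only in abc18.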

    Read the article

  • Questions on SQL Server 2008 Full-Text Search

    - by Eddie
    I have some questions about SQL 2K8 integrated full-text search. Say I have the following tables:

        Car         with columns: id (int - pk), makeid (fk), description (nvarchar), year (int), features (int - bitwise value - 32 features only)
        CarMake     with columns: id (int - pk), mfgname (nvarchar)
        CarFeatures with columns: id (int - 1, 2, 4, 8, etc.), featurename (nvarchar)

    If someone searches "red honda civic 2002 4 doors", how would I parse the input string so that I could also search in the "CarMake" and "CarFeatures" tables?

    Read the article
