Search Results

Search found 33139 results on 1326 pages for 'embedded database'.

Page 651 of 1326

  • Transforming binary data using SSIS and SQL Server 2008

    - by Rick
    Hello all - I have a task to import, transform and extract zipped binary files that contain both text data and embedded binary data. Within the files is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a single-threaded C# app that grabs all the files from a directory (currently there are 13K files of varying sizes), extracts the data on a single thread and inserts it into the database line by line. As you can imagine, this is very slow and unacceptable. There are several different parsing routines used, depending on the header record in the file, and there are potentially up to a million rows per file once all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list:
    How do I iterate through a packet of data using SSIS? In the app, each file is decompressed and then parsed using streams and byte arrays, and routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap the app code up into script task(s) and let them do the custom processing? The data is separated by year, and the SQL Server tables are partitioned by year as well. I also need to be able to "catch" bad file data and most likely process it by hand.
    Should I simply load the zipped file into SQL as a blob and parse the file with T-SQL? Would that be multi-threaded if done that way? I'm not sure how to do the parsing involved here in T-SQL. Which do you think would be faster?
    Potentially the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up?
    Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server; getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick
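
    Before deciding between SSIS script tasks and T-SQL, it may help to see how far the existing C# app can get on its own: the usual fix for "single thread, row-by-row inserts" is to parse files in parallel and push each parsed batch with SqlBulkCopy. This is a hedged sketch only; the directory, staging table and ParseFile helper below are assumptions, not part of the original setup.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;
        using System.Threading.Tasks;

        class BulkLoader
        {
            // Placeholder for the existing per-format parsing routines; assumed to return
            // one DataRow per extracted record, matching the staging table's columns.
            static DataTable ParseFile(string path) { throw new NotImplementedException(); }

            static void Main()
            {
                var files = Directory.EnumerateFiles(@"C:\incoming", "*.zip"); // path is an assumption

                Parallel.ForEach(files, new ParallelOptions { MaxDegreeOfParallelism = 8 }, file =>
                {
                    DataTable rows = ParseFile(file);                  // decompress + parse one file

                    using (var conn = new SqlConnection("<connection string>"))
                    {
                        conn.Open();
                        using (var bulk = new SqlBulkCopy(conn))
                        {
                            bulk.DestinationTableName = "dbo.RawRows"; // assumed staging table
                            bulk.BatchSize = 10000;
                            bulk.WriteToServer(rows);                  // one bulk insert per file
                        }
                    }
                });
            }
        }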


  • Most elegant way to morph this sequence

    - by Ed Woodcock
    Hi folks: I've got the day of the week stored in a database table (that I do not control), and I need to use it in my code. Problem is, I want to use the System.DayOfWeek enum to represent this, and the sequences are not the same. In the database it's as follows:
        1 2 3 4 5 6 7
        S M T W T F S
    I need it as follows:
        0 1 2 3 4 5 6
        M T W T F S S
    What's the most elegant way to do this? For example, I could do:
        i = dayOfWeek;
        i = i - 2;
        if (i < 0) { i = 6; }
    but that's a bit inelegant. Any suggestions?
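
    A hedged note on the arithmetic: because the two sequences are just rotations of each other, the whole conversion collapses to one modulo expression. A minimal C# sketch, assuming dbDay holds the 1-7, Sunday-first value from the table:

        // dbDay:  1 = Sunday ... 7 = Saturday (database convention above)
        // result: 0 = Monday ... 6 = Sunday  (target convention above)
        int mondayFirst = (dbDay + 5) % 7;

    Spot check against the two rows shown in the question: Sunday (1) maps to 6, Monday (2) maps to 0, Saturday (7) maps to 5.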


  • Mixing Transaction Script pattern with DDD/CQRS

    - by Herman
    Hi all, here is the situation: in order to support our legacy system, we need to insert into a table whenever a user logs in. This is basically a CRUD operation, so it doesn't really make sense to create a repository/entity/command/event for it, since it isn't tied to any business rules at all. The only benefit of creating a CQRS command is that the database write can happen asynchronously under that model. Which is the better route to take?
    Use CQRS, and then call a stored proc when handling that command?
    Just call the database directly in the controller? (I am using ASP.NET MVC.)
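
    If the thin-command route wins out, the whole thing can stay very small. A hedged C# sketch of what that might look like; the command name, stored procedure and connection handling are all assumptions, and the async calls need .NET 4.5 or later:

        using System.Data;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        // Hypothetical names; the stored procedure and connection string are assumptions.
        public class RecordLoginCommand
        {
            public int UserId { get; set; }
        }

        public class RecordLoginHandler
        {
            private readonly string _connectionString;
            public RecordLoginHandler(string connectionString) { _connectionString = connectionString; }

            // Fire the legacy insert without blocking the login request.
            public async Task HandleAsync(RecordLoginCommand cmd)
            {
                using (var conn = new SqlConnection(_connectionString))
                using (var sp = new SqlCommand("dbo.usp_RecordLogin", conn) { CommandType = CommandType.StoredProcedure })
                {
                    sp.Parameters.AddWithValue("@UserId", cmd.UserId);
                    await conn.OpenAsync();
                    await sp.ExecuteNonQueryAsync();
                }
            }
        }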


  • Samba, Windows, gvim & "READ ERRORS"

    - by l.thee.a
    I have a samba server on my embedded box. I was planning to use it to edit files easily. However, when I open files on my Windows machine, I get read errors. I have tried TextPad, gvim and Kate (andLinux). In gvim I first get "write error in swap file" and it is then replaced by "[READ ERRORS]". If I access the same file through pyNeighborhood (Ubuntu) I can read/write/delete it, etc. Samba security is set to share and the guest user is root. Also, if I move the file to my Ubuntu samba server, I get no problems. Any ideas? PS: I am using smbd 3.0.25b.


  • Simulating C-style for loops in Python

    - by YGA
    (Even the title of this is going to cause flames, I realize.) Python made the deliberate design choice to have the for loop use explicit iterables, with the benefit of considerably simplified code in most cases. However, sometimes it is quite a pain to construct an iterable if your test case and update function are complicated, and so I find myself writing the following while loops:
        val = START_VAL
        while <awkward/complicated test case>:
            # do stuff ...
            val = <awkward/complicated update>
    The problem with this is that the update is at the bottom of the while block, meaning that if I want to have a continue embedded somewhere in it I have to: use duplicate code for the complicated/awkward update, AND run the risk of forgetting it and having my code loop forever. I could go the route of hand-rolling a complicated iterator:
        def complicated_iterator(val):
            while <awkward/complicated test case>:
                yield val
                val = <awkward/complicated update>

        for val in complicated_iterator(start_val):
            if <random check>:
                continue  # no issues here
            # do stuff
    This strikes me as waaaaay too verbose and complicated. Do folks on Stack Overflow have a simpler suggestion?


  • What is the best PHP framework for building a website around a heavily relational PostgreSQL database?

    - by Kenaniah
    First of all, the framework of choice needs to have excellent support for PostgreSQL. I don't care about MySQL because it doesn't have half of the features the application I will be porting requires. (And when I say excellent support, I mean that their approach to database drivers has not been solely trained in MySQL.) The ideal framework:
        - Should take full advantage of PHP 5.3 and PostgreSQL 8.4 features
        - Should support new technologies such as OpenID and social networking
        - Should support complex relationships between database relations
        - Should have an intelligent validation system
        - Should have a basic library of helpful views (such as pagination, navigation, etc.)
        - Should probably be MVC based
        - Should have excellent documentation and an active development community
        - Should namespace classes intelligently
    What I'm looking for might be more of a library of utilities, as I really don't want to be restricted by the framework in what I can and can not do. I have my own small library of core classes that take care of business logic, and I will most likely want to integrate those with the new framework as well. Thanks!


  • Is javac enough to build an OSGi bundle?

    - by Chatanga
    To produce a bundle from source using a tool such as javac, you need to provide it with a linear classpath. Unfortunately, that won't work in some situations that are still perfectly legal from an OSGi point of view: dependencies with embedded JARs in them, or the same packages contained in different dependencies. Since javac doesn't understand OSGi metadata, I won't be able to simply put the dependencies on a classpath; a finer, package-grained approach seems necessary. How is this problem addressed by people using OSGi in an automated process (continuous integration)? Strangely, there are a lot of resources on the web on how to create a bundle JAR (creating the metadata, creating the JAR) provided you already have the classes/inner JARs to put inside, but very little on how to actually get these classes compiled.


  • Entity Framework doesn't like 0..1 to * relationships.

    - by Orion Adrian
    I have a database schema with two tables. The first table has a single column that is an identity and primary key. The second table contains two columns: one is an nvarchar primary key and the other is a nullable foreign key to the first table. On the default import of the database I get the following error:
        Condition cannot be specified for Column member 'ForeignKeyId' because it is marked with a 'Computed' or 'Identity' StoreGeneratedPattern.
    where ForeignKeyId is the foreign key reference in the second table. Is this just something the entity model doesn't do? Or am I missing something?


  • Can SQLAlchemy DateTime Objects Only Be Naive?

    - by Sean M
    I am working with SQLAlchemy, and I'm not yet sure which database I'll use under it, so I want to remain as DB-agnostic as possible. How can I store a timezone-aware datetime object in the DB without tying myself to a specific database? Right now, I'm making sure that times are UTC before I store them in the DB and converting them to localized times at display time, but that feels inelegant and brittle. Is there a DB-agnostic way to get a timezone-aware datetime out of SQLAlchemy instead of getting naive datetime objects out of the DB?


  • Where should I put the EF entities and data annotations in an ASP.NET MVC + Entity Framework project?

    - by giddy
    So I have a data entity class generated by Entity Framework 4 for my SQL Express 2008 database. This data context is exposed via a WCF Data Service/OData to Silverlight and WinForms clients. Should the data entities + edmx file (generated by EF4) go in a separate class library? The problem then is that I would specify data annotations for a few entities, and some of them would require MVC-specific attributes (like CompareAttribute), so the class library would also have to reference the MVC DLLs. There also happens to be a User entity which will be encapsulated or wrapped into an IIdentity in the website, so it's pretty tied to the MVC website. Or should it maybe go in a Base folder in the MVC project itself? Mostly the website is data-driven around the database, like approve users, change global settings, etc. The real business happens in the Silverlight and WinForms apps. I'm using MVC 3 RC2 with Razor. Thanks
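
    One pattern that keeps the EF entities free of MVC references, offered here only as a hedged sketch (the class and property names below are invented): leave the generated entities in the class library and put the MVC-specific validation attributes on a separate view model that lives in the web project.

        // In the MVC project, not the entity class library (names are hypothetical).
        using System.ComponentModel.DataAnnotations;

        public class RegisterUserViewModel
        {
            [Required]
            public string UserName { get; set; }

            [Required, DataType(DataType.Password)]
            public string Password { get; set; }

            // CompareAttribute ships in System.Web.Mvc in MVC 3 (and in
            // System.ComponentModel.DataAnnotations from .NET 4.5 onwards),
            // so the MVC-specific rule stays out of the EF model.
            [System.Web.Mvc.Compare("Password")]
            public string ConfirmPassword { get; set; }
        }

    A controller action then maps the validated view model onto the EF entity, so the class library never needs a reference to System.Web.Mvc.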


  • Where does C# and the .NET Framework fail?

    - by Nate Bross
    In my non-programming life, I always attempt to use the appropriate tool for the job, and I feel that I do the same in my programming life, but I find that I am choosing C# and .NET for almost everything. I'm finding it hard to come up with (realistic business) needs that cannot be met by .NET and C#. Obviously embedded systems might require something less bloated than the .NET Micro Framework, but I'm really looking for line-of-business type situations where .NET is not the best tool. I'm primarily a C# and .NET guy, since it's what I'm most comfortable in, but I know a fair amount of C++, PHP, VB, PowerShell, batch files, and Java, as well as being versed in the web technologies (JavaScript, HTML/CSS). But I'm open-minded about my skill set, and I'm looking for cases where C# and .NET are not the right tool for the job. The bottom line here is that I feel I'm choosing C# and .NET simply because I am very comfortable with them, so I'm looking for cases where you have chosen something other than .NET, even though you are primarily a .NET developer.


  • What messaging technologies in Windows CE for guaranteed message delivery?

    - by Aidanapword
    All, we are building a Windows CE (6.0 R3) based device that requires guaranteed and audit-ready message delivery (including store & forward) up to and down from the cloud. I have been looking for choices beyond:
        - MSMQ
        - a proprietary solution (what our prototype device is using)
        - AMQP (research on using this in our context is now starting)
    ... are there any others? We will be transporting sensitive data (who isn't?!?!) over a public network, and large-scale options are required. Anything running on an embedded device will be performance sensitive too. Thanks! Aidanapword


  • OOP function and if statement

    - by Luke
    Not sure if I can ask two questions? If I run the following function in my database class:
        function generateUserArray() {
            $u = array();
            $result = $this->selectAllUsers();
            while ($row = mysql_fetch_assoc($result)) {
                $u[] = $row['username'];
            }
            return $u;
        }
    Would I call it like this?
        $u[] = $datebase->generateUserArray();
    My second question: will this work?
        else if ($database->addLeagueInformation($subname, $subformat, $subgame, $subseason, $subwindow, $subadmin, $subchampion, $subtype)
            && $databases->addLeagueTable($name)
            && $_SESSION['players'] == $subplayers
            && $comp_name = "$format_$game_$name_$season"
            && $_SESSION['comp_name'] = $comp_name)
    Thank you


  • Binary search of an inaccessible data field in LDAP from Python

    - by EricR
    I'm interested in reproducing a particular Python script. I have a friend who was accessing an LDAP database without authentication. There was a particular field of interest (we'll call it nin, an integer, for reference), and this field wasn't readable without proper authentication. However, my friend managed to recover this field through some sort of binary search (rather than just looping through integers) on the data: he would check the first digit, test whether the value was greater or less than his current guess, adjust the guess until the query returned a true value indicating existence, then add digits and keep checking until he had the exact value of the integer nin. Any ideas on how he went about this? I have access to a similarly set up database.
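
    The narrowing step being described is ordinary binary search driven by a yes/no comparison query. A hedged sketch, written in C#, with NinAtLeast as a hypothetical stand-in for whatever comparison lookup the directory exposes (e.g. a filter along the lines of (&(uid=target)(nin>=X)) that only reveals whether a match exists):

        // Hypothetical oracle: true if the directory reports a match for "nin >= candidate"
        // for the given target entry. How that query is issued is out of scope here.
        static bool NinAtLeast(string target, long candidate)
        {
            throw new NotImplementedException();
        }

        // Classic binary search: each comparison query halves the remaining range,
        // so even a 9-digit value falls out in ~30 queries instead of millions.
        static long RecoverNin(string target, long lo, long hi)
        {
            while (lo < hi)
            {
                long mid = lo + (hi - lo + 1) / 2;   // bias upward so the loop terminates
                if (NinAtLeast(target, mid))
                    lo = mid;                        // nin >= mid
                else
                    hi = mid - 1;                    // nin < mid
            }
            return lo;                               // lo == hi == nin
        }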


  • Querying a DataTable to get rows with a certain value in the first column

    - by user1776590
    I am currently using the code below and am wondering if it would be possible to query based on a value in the row rather than just taking a fixed number of rows:
        var q = sqlData.AsEnumerable().Take(2);
    The data comes in from a database and is put into the DataTable, but at the moment this only lets me select the first two rows. I would like to query the DataTable so that I get the rows I need based on a value in the first column of the table itself (e.g. find the five rows in the table that match and pull that information out).
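
    For reference, a hedged sketch of the usual LINQ-to-DataSet filter for this; the column name and value are placeholders, and Field<T> needs a reference to System.Data.DataSetExtensions:

        using System.Data;
        using System.Linq;

        // Assumed: sqlData is the DataTable already filled from the database,
        // and its first column is an integer key called "Id" (placeholder name).
        var wanted = sqlData.AsEnumerable()
                            .Where(r => r.Field<int>("Id") == 5)   // filter on the column value
                            .ToList();                             // or CopyToDataTable() for a new DataTable

        foreach (DataRow row in wanted)
        {
            // work with each matching row here
        }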


  • Why is the Django admin not accepting nullable foreign keys?

    - by p.g.l.hall
    Here is a simplified version of one of my models:
        class ImportRule(models.Model):
            feed = models.ForeignKey(Feed)
            name = models.CharField(max_length=255)
            feed_provider_category = models.ForeignKey(FeedProviderCategory, null=True)
            target_subcategories = models.ManyToManyField(Subcategory)
    This class manages a rule for importing a list of items from a feed into the database. The admin system won't let me add an ImportRule without selecting a feed_provider_category despite it being declared in the model as nullable. The database (SQLite at the moment) even checks out ok:
        >>> .schema
        ...
        CREATE TABLE "someapp_importrule" (
            "id" integer NOT NULL PRIMARY KEY,
            "feed_id" integer NOT NULL REFERENCES "someapp_feed" ("id"),
            "name" varchar(255) NOT NULL,
            "feed_provider_category_id" integer REFERENCES "someapp_feedprovidercategory" ("id"),
        );
        ...
    I can create the object in the python shell easily enough:
        f = Feed.objects.get(pk=1)
        i = ImportRule(name='test', feed=f)
        i.save()
    ...but the admin system won't let me edit it, of course. How can I get the admin to let me edit/create objects without specifying that foreign key?


  • Query DNSBL or other block lists using PHP

    - by 55skidoo
    Is there any way to use PHP code to query a DNSBL (block list) provider and find out if the IP address submitted is a bad actor? I would like to take an existing IP address out of a registration database, check whether it's a known block-listed IP address by performing a lookup on it, and then, if it's blacklisted, act on it (for example, delete the entry from the registration database). Most of the instructions I have seen assume you are trying to query the block list via a mail server, which I can't do. I tried querying via web browser by typing in queries such as "58.64.xx.xxx.dnsbl.sorbs.net", but that didn't work.
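
    For what it's worth, the lookup itself is just an ordinary DNS A-record query against the reversed octets of the address; a browser attempt fails because there is no web page behind the answer, not because the name is wrong. A hedged sketch of the check, written here in C# (in PHP the same reversed-name query is typically issued with checkdnsrr() or gethostbyname()); the zone and test address are placeholders:

        using System;
        using System.Linq;
        using System.Net;
        using System.Net.Sockets;

        static class DnsblCheck
        {
            // Returns true if ip is listed in the given DNSBL zone (zone name is a placeholder).
            // DNSBLs answer an ordinary A-record query for <reversed-octets>.<zone>.
            static bool IsListed(string ip, string zone = "dnsbl.sorbs.net")
            {
                string reversed = string.Join(".", ip.Split('.').Reverse());
                try
                {
                    IPAddress[] answers = Dns.GetHostAddresses(reversed + "." + zone);
                    return answers.Length > 0;          // any answer (usually 127.0.0.x) means "listed"
                }
                catch (SocketException)
                {
                    return false;                       // NXDOMAIN: not listed
                }
            }

            static void Main()
            {
                Console.WriteLine(IsListed("127.0.0.2"));   // 127.0.0.2 is the conventional DNSBL test address
            }
        }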


  • Error in MySQL Workbench Forward Engineered Stored Procedures

    - by colithium
    I am using MySQL Workbench (5.1.18 OSS rev 4456) to forward engineer a SQL CREATE script. For every stored procedure, the automatic process outputs something like:
        DELIMITER //
        USE DB_Name//
        DB_Name//
        DROP procedure IF EXISTS `DB_Name`.`SP_Name` //
        USE DB_Name//
        DB_Name//
        CREATE PROCEDURE `DB_Name`.`SP_Name` (id INT)
        BEGIN
            SELECT * FROM Table_Name WHERE Id = id;
        END//
    The two lines that are simply the database name followed by the delimiter are errors and are reported as such when running the script. As long as they are ignored, it looks like everything gets created just fine. But why would it add those lines? I am creating the database in the WAMP environment which uses MySQL 5.1.36.


  • Searching for patterns to create a TCP Connection Pool for high performance messaging

    - by JoeGeeky
    I'm creating a new client/server application in C# and expect to have a fairly high rate of connections. That made me think of database connection pools, which help mitigate the expense of creating and disposing connections between the client and the database. I would like to create a similar capability for my application and haven't been able to find any good examples of how to apply this pattern. Do I really need to spin up an instance of a TcpClient every time I want to send a message to the server and receive a receipt message? Each connection is expected to transport between 1-5KB, with each receiving a 1KB response message. I realize this question is somewhat vague, but I am starting from scratch, so I am open to suggestions, even if that means my suppositions are all wrong.
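
    For the "do I really need a new TcpClient per message" part, here is a deliberately small, hedged sketch of the pooling idea; the names and sizing policy are assumptions, and a real pool would also need idle timeouts and a healthier liveness check than TcpClient.Connected, which only reflects the last operation:

        using System;
        using System.Collections.Concurrent;
        using System.Net.Sockets;

        public sealed class TcpClientPool : IDisposable
        {
            private readonly ConcurrentBag<TcpClient> _idle = new ConcurrentBag<TcpClient>();
            private readonly string _host;
            private readonly int _port;

            public TcpClientPool(string host, int port)
            {
                _host = host;
                _port = port;
            }

            // Reuse an idle connection when one exists, otherwise open a new one.
            public TcpClient Rent()
            {
                TcpClient client;
                if (_idle.TryTake(out client) && client.Connected)
                    return client;
                client = new TcpClient();
                client.Connect(_host, _port);
                return client;
            }

            // Hand the connection back instead of closing it.
            public void Return(TcpClient client)
            {
                if (client.Connected)
                    _idle.Add(client);
                else
                    client.Close();
            }

            public void Dispose()
            {
                TcpClient client;
                while (_idle.TryTake(out client))
                    client.Close();
            }
        }

    Callers would wrap each request/response exchange in Rent()/Return() so the underlying sockets get reused instead of being rebuilt per message.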


  • Performance of fopen vs stat

    - by Alex Marshall
    Hello, I'm writing several C programs for an embedded system where every bit of performance we can squeeze out will matter. Part of that is accessing log files. When determining if a file exists, is there any performance difference between using open/fopen and stat? I've been using stat on the assumption that it only has to do a quick check against the file system, whereas fopen would have to actually gain access to the file and manipulate internal data structures before returning. Is there any merit to this?


  • SQL Access INSERT INTO Autonumber Field

    - by KrazyKash
    I'm trying to make a Visual Basic application which is connected to a Microsoft Access database using OLEDB. Inside my database I have a user table with the following layout:
        ID - AutoNumber
        Username - Text
        Password - Text
        Email - Text
    To insert data into the table I use the following query:
        INSERT INTO Users (Username, Password, Email) VALUES ('004606', 'Password', '[email protected]')
    However, I seem to get an error with this statement, and according to VB it's a syntax error. But then I tried the following query:
        INSERT INTO Users (Username) VALUES ('004606')
    This query seemed to work absolutely fine... So the problem is I can insert into just one field but not all 3 (excluding the ID field, because it's an AutoNumber). Any help would be appreciated, thanks.
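
    One frequent cause of exactly this symptom, offered as a hedged guess rather than a confirmed diagnosis: Password is a reserved word in Access SQL, so the three-column statement fails while the Username-only one succeeds. Bracketing the column name and using parameters sidesteps it. A C# sketch (the connection string and values are placeholders, and the same OleDbCommand call can be made from VB):

        using System.Data.OleDb;

        // Password is bracketed because it is a reserved word in Access SQL;
        // OLE DB uses positional ? parameters, so the order below must match.
        string sql = "INSERT INTO Users (Username, [Password], Email) VALUES (?, ?, ?)";

        using (var conn = new OleDbConnection(@"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\app.accdb"))
        using (var cmd = new OleDbCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@p1", "004606");
            cmd.Parameters.AddWithValue("@p2", "secret");
            cmd.Parameters.AddWithValue("@p3", "user@example.com");

            conn.Open();
            cmd.ExecuteNonQuery();
        }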


  • How would you start automating my job? - Part 2

    - by Jurily
    (Followup to this question) After surviving the first wave of incoming shipments (9 hours of copy/paste), I now believe I have all the requirements. Here is the updated workflow:
        1) Monkey collects email attachments (4 Excel spreadsheets, 1 PDF)
        2) Monkey creates central database, does complex calculations (right now this is also an Excel spreadsheet)
        3) Monkey sends data to two bosses, who set the retail prices independently; first one to reply wins
        4) Monkey sends order form to our other warehouses, also Excel
        5) Monkey sends spreadsheets to VIP customers, carefully sanitized and formatted (4 different discount categories)
        6) Jurily enters the data into the accounting system. I've given up on automating this part, there's too much business logic involved, and the database is a pile of sh^W legacy
    My question: What technologies would you use for a quick and dirty solution? I'm mostly sold on C#, but coming from a Linux/C++ background, I'm horribly confused about my choices in Microsoft-land. For bonus points: How would you redesign the whole system from the ground up? P.S. in case you were wondering, my job title is System Administrator.


  • MS Access (2010) Enable Design View

    - by Tim GONELLA
    I downloaded the Access template below for doing a home inventory:
    http://office.microsoft.com/en-us/templates/results.aspx?qu=home%20inventory&ex=1&queryid=0d245f2a%2Dacdc%2D4161%2D92c8%2D8ba16a52ab32&AxInstalled=1&c=0#ai:TC101918100|
    The design view is not visible, which is a bit of a nuisance. Things I've tried:
        1) In options/options/current database, the check boxes (enable layout view & enable design changes for tables in Datasheet view) are both greyed out.
        2) I've unblocked the file using Right-Click > Properties.
        3) I've tried copying/exporting the objects to another database, but can only copy/export the tables.
        4) I've tried holding Shift when opening the DB.
        5) Enabling all trust permissions, etc.
    None of these work. Does anybody have any suggestions? (I'm using Office 2010.) Thanks


  • How can I match at the beginning of any line, including the first, with a Perl regex?

    - by JoelFan
    According to the Perl documentation on regexes: By default, the "^" character is guaranteed to match only the beginning of the string ... Embedded newlines will not be matched by "^" ... You may, however, wish to treat a string as a multi-line buffer, such that the "^" will match after any newline within the string ... you can do this by using the /m modifier on the pattern match operator. The "after any newline" part means that it will only match at the beginning of the 2nd and subsequent lines. What if I want to match at the beginning of any line (1st, 2nd, etc.)? EDIT: OK, it seems that the file has BOM information (3 chars) at the beginning and that's what's messing me up. Any way to get ^ to match anyway? EDIT: So in the end it works (as long as there's no BOM), but now it seems that the Perl documentation is wrong, since it says "after any newline".


  • Using a property file in a Hibernate mapping

    - by Zoltan Hamori
    Hi, I have a two-node environment using the same database. In the database there is a resource table with columns RESOURCE_ID, CODE, NODE. The content of the NODE column can be 1 or 2, depending on which node can use the row. As I need to deploy the same EAR to both nodes, I would like to map this table like this:
        <hibernate-mapping>
            <class name="ResourceVO" table="RESOURCE"
                   dynamic-update="true"
                   optimistic-lock="dirty"
                   where="NODE=${node.value}" >
    I would like to store the node.value property on the file system, so each instance can identify which resources to use. Is this possible in Hibernate?

