Search Results

Search found 5254 results on 211 pages for 'concept analysis'.

  • Check for valid IMEI

    - by Tim
    Hi, does somebody know how to check for a valid IMEI? I have found a function to check one on this page: http://www.dotnetfunda.com/articles/article597-imeivalidator-in-vbnet-.aspx But it returns False for valid IMEIs (e.g. 352972024585360). I can validate them online on this page: http://www.numberingplans.com/?page=analysis&sub=imeinr What is the correct way (in VB.Net) to check whether a given IMEI is valid? Regards, Tim PS: This function from the above page must be incorrect in some way:

        Public Shared Function isImeiValid(ByVal IMEI As String) As Boolean
            Dim cnt As Integer = 0
            Dim nw As String = String.Empty
            Try
                For Each c As Char In IMEI
                    cnt += 1
                    If cnt Mod 2 <> 0 Then
                        nw += c
                    Else
                        Dim d As Integer = Integer.Parse(c) * 2 ' Every second digit has to be doubled
                        nw += d.ToString() ' Generates a new number with the doubled digits
                    End If
                Next
                Dim tot As Integer = 0
                For Each ch As Char In nw.Remove(nw.Length - 1, 1)
                    tot += Integer.Parse(ch) ' Add all digits together
                Next
                Dim chDigit As Integer = 10 - (tot Mod 10) ' Find the check digit: the remainder of the sum, subtracted from 10
                If chDigit = Integer.Parse(IMEI(IMEI.Length - 1)) Then ' Compare it with the last digit of the given IMEI
                    Return True
                Else
                    Return False
                End If
            Catch ex As Exception
                Return False
            End Try
        End Function
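    For reference, the standard IMEI check is the Luhn algorithm: starting from the rightmost digit, double every second digit, subtract 9 from any doubled value above 9, sum everything, and require the total to be divisible by 10. Below is a minimal C# sketch of that check (porting it to VB.Net is mechanical); it accepts 352972024585360, the number the function above rejects:

        using System;
        using System.Linq;

        public static class ImeiValidator
        {
            public static bool IsValid(string imei)
            {
                // An IMEI is exactly 15 decimal digits, the last being the Luhn check digit.
                if (imei == null || imei.Length != 15 || !imei.All(char.IsDigit))
                    return false;

                int sum = 0;
                for (int i = 0; i < imei.Length; i++)
                {
                    int d = imei[i] - '0';
                    if (i % 2 == 1)        // every second digit, counted from the right
                    {
                        d *= 2;
                        if (d > 9) d -= 9; // equivalent to summing the digits of the product
                    }
                    sum += d;
                }
                return sum % 10 == 0;      // valid when the Luhn sum is a multiple of 10
            }
        }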

  • When should one use the following: Amazon EC2, Google App Engine, Microsoft Azure and Salesforce.com

    - by vicky21
    I am asking this in a very general sense, from both the cloud provider's and the cloud consumer's perspective. The question is also not about any specific kind of application; in fact, the intention is to learn which types of applications/domains fit into which cloud slab - SaaS, PaaS, IaaS. My understanding so far is: IaaS: raw hardware (processors, networks, storage). PaaS: OS, system software, development frameworks, virtual machines. SaaS: software applications. It would be great if Stackoverflowers could share their understanding and experiences of the cloud computing concept.

    EDIT: OK, I will put it more specifically:

    Amazon EC2: You don't have control over the hardware layer, but you can take your choice of OS image, dev framework (.NET, J2EE, LAMP) and application, and put it on EC2 hardware. Can you deploy an application built with Google App Engine or Azure on EC2?

    Google App Engine: You don't have control over hardware or OS, and you get a specific dev framework to build your application. Can you take any existing Java or Python application and port it to GAE? Or vice versa: can applications that were built on GAE be taken out of GAE and ported to an application server like WebSphere or WebLogic?

    Azure: You don't have control over hardware or OS, and you get a specific dev framework to build your application. Can you take any existing .NET application and port it to Azure? Or vice versa: can applications that were built on Azure be taken out of Azure and ported to an application server like BizTalk?

  • Pass table as parameter to SQLCLR TV-UDF

    - by Skeolan
    We have a third-party DLL that can operate on a DataTable of source information and generate some useful values, and we're trying to hook it up through SQLCLR to be callable as a table-valued UDF in SQL Server 2008. Taking the concept here one step further, I would like to program a CLR table-valued function that operates on a table of source data from the DB. I'm pretty sure I understand what needs to happen on the T-SQL side of things; but what should the method signature look like in the .NET (C#) code? What would be the parameter datatype for "table data from SQL Server"? e.g.

        /* Setup */
        CREATE TYPE InTableType AS TABLE (LocationName VARCHAR(50), Lat FLOAT, Lon FLOAT)
        GO
        CREATE TYPE OutTableType AS TABLE (LocationName VARCHAR(50), NeighborName VARCHAR(50), Distance FLOAT)
        GO
        CREATE ASSEMBLY myCLRAssembly FROM 'D:\assemblies\myCLR_UDFs.dll'
        WITH PERMISSION_SET = EXTERNAL_ACCESS
        GO
        CREATE FUNCTION GetDistances(@locations InTableType) RETURNS OutTableType
        AS EXTERNAL NAME myCLRAssembly.GeoDistance.SQLCLRInitMethod
        GO

        /* Execution */
        DECLARE @myTable InTableType
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('aaa', -50.0, -20.0)
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('bbb', -20.0, -50.0)
        SELECT * FROM @myTable

        DECLARE @myResult OutTableType
        INSERT INTO @myResult
        MyCLRTVFunction @myTable --returns a table result calculated using the input

    The lat/lon - distance thing is a silly example that should of course be better handled entirely in SQL; but I hope it illustrates the general intent of table-in - table-out through a table-valued UDF tied to a SQLCLR assembly. I am not certain this is possible; what would the SQLCLRInitMethod method signature look like in the C#?

        public class GeoDistance
        {
            [SqlFunction(FillRowMethodName = "FillRow")]
            public static IEnumerable SQLCLRInitMethod(<appropriateType> myInputData)
            {
                //...
            }

            public static void FillRow(...)
            {
                //...
            }
        }

    If it's not possible, I know I can use a "context connection=true" SQL connection within the C# code to have the CLR component query for the necessary data given the relevant keys; but that's sensitive to changes in the DB schema, so I hope to just have SQL bundle up all the source data and pass it to the function. Bonus question - assuming this works at all, would it also work with more than one input table?
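    For what it's worth, as far as I know SQL Server does not allow table types as parameters to SQLCLR functions (CLR UDFs take scalar parameters only), so the usual pattern is the context-connection fallback mentioned above: pass a key and let the function read its own input. A hedged C# sketch of that pattern, with hypothetical table and column names; the rows are buffered before returning, which avoids keeping the context connection busy while SQL Server consumes the results:

        using System;
        using System.Collections;
        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class GeoDistance
        {
            [SqlFunction(
                FillRowMethodName = "FillRow",
                DataAccess = DataAccessKind.Read,   // required to open the context connection
                TableDefinition = "LocationName NVARCHAR(50), NeighborName NVARCHAR(50), Distance FLOAT")]
            public static IEnumerable GetDistances(int batchId)
            {
                var results = new ArrayList();
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "SELECT LocationName, Lat, Lon FROM dbo.Locations WHERE BatchId = @id", conn);
                    cmd.Parameters.AddWithValue("@id", batchId);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // ... compute the neighbor/distance values here ...
                            results.Add(new object[] { reader.GetString(0), "neighbor", 0.0 });
                        }
                    }
                }
                return results; // each element becomes one output row via FillRow
            }

            public static void FillRow(object row, out string locationName,
                                       out string neighborName, out double distance)
            {
                var cols = (object[])row;
                locationName = (string)cols[0];
                neighborName = (string)cols[1];
                distance = (double)cols[2];
            }
        }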

  • Where should I declare my CDI resources?

    - by Laird Nelson
    JSR-299 (CDI) introduces the (unfortunately named) concept of a resource: http://docs.jboss.org/weld/reference/1.0.0/en-US/html/resources.html#d0e4373 You can think of a resource in this nomenclature as a bridge between the Java EE 6 brand of dependency injection (@EJB, @Resource, @PersistenceContext and the like) and CDI's brand of dependency injection. The general gist seems to be that somewhere (and this will be the root of my question) you declare what amounts to a bridge class: it contains fields annotated both with Java EE's @EJB, @PersistenceContext or @Resource annotations and with CDI's @Produces annotations. The net effect is that Java EE 6 injects a persistence context, say, where it's called for, and CDI recognizes that injected PersistenceContext as a source for future injections down the line (handled by @Inject). My question is: what is the community's consensus - or is there one - on:
    - what this bridge class should be named
    - where this bridge class should live
    - whether it's best to localize all this stuff into one class or make several of them
    ...? Left to my own devices, I was thinking of declaring a single class called CDIResources and using that as the One True Place to link Java EE's DI with CDI's DI. Many examples do something similar, but I'm not clear on whether they're "just" examples or whether that's a good way to do it. Thanks.

  • Dashboard for collaborative science / data processing projects

    - by rescdsk
    Hi, Continuous Integration servers like Hudson are a pretty amazing addition to software development. I work in an academic research lab, and I'd love to apply similar principles to scientific data analysis. I want a dashboard-like view of which collections of data are fine, which ones are failing their tests (simple shell scripts, mostly), and so on. A lot like the Chromium dashboard (WARNING: page takes a long time to load). It takes work from at least 4 people, and maybe 10 or 12 hours of computer time, to bring our data (from behavioral studies) from its raw form to its final, easily-analyzed form. I've tried Hudson and buildbot, but neither is really appropriate to our workflow. We just want to run a bunch of tests on maybe fifty independent collections of subject data, and display the results nicely. SO! Does anyone have a recommendation of a way to generate this kind of report easily? Or, can you think of a good way to shoehorn this kind of workflow into a continuous integration server? Or, can you recommend a unit testing dashboard that could deal with tests that are little shell scripts rather than little functions? Thank you!

  • Sharing build artifacts between jobs in Hudson

    - by programming panda
    Hi, I'm trying to set up our build process in Hudson. Job 1 will be a super fast (hopefully) continuous integration build job that will be built frequently. Job 2 will be responsible for running a comprehensive test suite, at a regular interval or triggered manually. Job 3 will be responsible for running analysis tools across the codebase (much like Job 2). I tried using the "Use custom workspace" feature under Advanced Project Options so that code compiled in Job 1 can be used in Jobs 2 and 3. However, it seems that all build artifacts remain inside the Job 1 workspace. Am I doing this right? Is there a better way of doing this? I guess I'm looking for something similar to a build pipeline setup, so that things can be shared and the appropriate jobs can be executed in stages. (I also considered using 'batch tasks', but it seems those can't be scheduled, only triggered manually?) Any suggestions are welcome. Thanks!

  • A question about the .NET Rfc2898DeriveBytes class

    - by IbrarMumtaz
    What is the difference with this class, as opposed to just using Encoding.ASCII.GetBytes(string)? I have had relative success with either approach; the former is a more long-winded approach whereas the latter is simple and to the point. Both seem to let you do the same thing eventually, but I am struggling to see the point in using the former over the latter. The basic concept I have been able to grasp is that you can convert string passwords into byte arrays to be used by, e.g., a symmetric encryption class such as AesManaged. Via the RFC class you get to use salt values and a password when creating your rfc object. I assume it's more secure, but still, that's an uneducated guess at best! Also, it allows you to return byte arrays of a requested size, or something like that. Here are a few examples to show where I am coming from:

        byte[] myPassinBytes = Encoding.ASCII.GetBytes("some password");

    or

        string password = "P@%5w0r]>";
        byte[] saltArray = Encoding.ASCII.GetBytes("this is my salt");
        Rfc2898DeriveBytes rfcKey = new Rfc2898DeriveBytes(password, saltArray);

    The 'rfcKey' object can now be used to set up the .Key or .IV properties on a symmetric encryption algorithm class, i.e.:

        RijndaelManaged rj = new RijndaelManaged();
        rj.Key = rfcKey.GetBytes(rj.KeySize / 8);
        rj.IV = rfcKey.GetBytes(rj.BlockSize / 8);

    'rj' should be ready to go! The confusing part: rather than using the 'rfcKey' object, can I not just use my 'myPassinBytes' array to help set up my 'rj' object? I have tried doing this in VS2008 and the immediate answer is NO! But have you guys got a better-educated answer as to why the RFC class is used over the alternative I mentioned above, and why?
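    As a side note on why the direct approach fails: a Rijndael/AES key must be exactly KeySize / 8 bytes long, and the ASCII bytes of an arbitrary password almost never are, so assigning them to .Key throws. Rfc2898DeriveBytes implements PBKDF2, which fixes both that and the security problem: it mixes the password with the salt, iterates a hash many times, and then lets you draw out exactly as many bytes as you need. A small sketch, with an illustrative iteration count:

        using System.Security.Cryptography;
        using System.Text;

        class KeyDerivationExample
        {
            static void Main()
            {
                // The raw ASCII bytes of a password are almost never a legal key length -
                // "some password" is 13 bytes, and assigning that to .Key would throw:
                byte[] raw = Encoding.ASCII.GetBytes("some password");

                // PBKDF2 yields exactly the number of bytes you ask for, whatever the
                // password length, after salting and iterating a hash:
                var rfcKey = new Rfc2898DeriveBytes(
                    "some password",
                    Encoding.ASCII.GetBytes("this is my salt"),
                    10000); // iteration count - illustrative, tune to taste

                using (var aes = new AesManaged())
                {
                    aes.Key = rfcKey.GetBytes(aes.KeySize / 8);   // e.g. 32 bytes for a 256-bit key
                    aes.IV  = rfcKey.GetBytes(aes.BlockSize / 8); // 16 bytes
                }
            }
        }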

  • Looking for ideas on how to refactor my (complex) algorithm

    - by _simon_
    I am trying to write my own Game of Life, with my own set of rules. The first 'concept' I would like to apply is socialization (which basically means whether a cell wants to be alone or in a group with other cells). The data structure is a 2-dimensional array (for now). In order to be able to move a cell to/away from a group of other cells, I need to determine where to move it. The idea is that I evaluate all the cells in the area (neighbours) and get a vector, which tells me where to move the cell. The size of the vector is 0 or 1 (don't move or move) and the angle is an array of directions (up, down, right, left). This is an image with a representation of forces on a cell, as I imagined it (but the reach could be more than 5). Let's for example take this picture: Forces from the lower-left neighbour: down (0), up (2), right (2), left (0). Forces from the right neighbour: down (0), up (0), right (0), left (2). Sum: down (0), up (2), right (0), left (0). So the cell should go up. I could write an algorithm with a lot of if statements and check all cells in the neighbourhood. Of course this algorithm would be easiest if the 'reach' parameter were set to 1 (first column on picture 1). But what if I change the reach parameter to 10, for example? I would need to write an algorithm for each 'reach' parameter in advance... How can I avoid this (notice that the force grows exponentially (1, 2, 4, 8, 16, 32, ...))? Can I use a specific design pattern for this problem? Also: the most important thing is not speed, but being able to extend the initial logic. Things to take into consideration: reach should be passed as a parameter; I would like to be able to change the function that calculates force (powers of two, Fibonacci); a cell can go to a new place only if that new place is not populated; watch for corners (you can't evaluate right and top neighbours in the top-right corner, for example).
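    One way to avoid a per-reach special case is to loop over the square neighbourhood of radius reach and let a pluggable force function of the distance do the weighting. A C# sketch of that idea, assuming the grid is a bool[,] of occupied cells and that an occupied neighbour at ring distance dist pushes the cell away with force(reach - dist); net the opposing directions afterwards (as in the example above, where left and right cancel) and move along the strongest remainder. The bounds check covers the corner cases:

        using System;

        static class Forces
        {
            // Returns the accumulated force per direction: [up, down, left, right].
            static int[] Compute(bool[,] grid, int row, int col, int reach,
                                 Func<int, int> force)
            {
                int rows = grid.GetLength(0), cols = grid.GetLength(1);
                int[] sums = new int[4]; // up, down, left, right
                for (int dr = -reach; dr <= reach; dr++)
                {
                    for (int dc = -reach; dc <= reach; dc++)
                    {
                        if (dr == 0 && dc == 0) continue;
                        int r = row + dr, c = col + dc;
                        if (r < 0 || r >= rows || c < 0 || c >= cols) continue; // corners/edges
                        if (!grid[r, c]) continue;                              // empty cell, no force
                        int dist = Math.Max(Math.Abs(dr), Math.Abs(dc));        // ring of the square neighbourhood
                        int f = force(reach - dist);
                        if (dr > 0) sums[0] += f; else sums[1] += f; // neighbour below pushes up, above pushes down
                        if (dc > 0) sums[2] += f; else if (dc < 0) sums[3] += f; // right pushes left, left pushes right
                    }
                }
                return sums;
            }

            // Usage: var s = Compute(grid, r, c, 5, n => 1 << n); // 1, 2, 4, 8, ... with closeness
            // Swapping in a Fibonacci (or any other) Func<int, int> changes the growth law.
        }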

  • How to generate comments in hbm2java-created POJOs?

    - by jschoen
    My current setup using Hibernate uses the hibernate.reveng.xml file to generate the various hbm.xml files, which are then turned into POJOs using hbm2java. We spent some time while designing our schema to place some pretty decent descriptions on the tables and their columns. I am able to pull these descriptions into the hbm.xml files when generating them using hbm2hbmxml. So I get something similar to this:

        <class name="test.Person" table="PERSONS">
          <comment>The comment about the PERSONS table.</comment>
          <property name="firstName" type="string">
            <column name="FIRST_NAME" length="100" not-null="true">
              <comment>The first name of this person.</comment>
            </column>
          </property>
          <property name="middleInitial" type="string">
            <column name="MIDDLE_INITIAL" length="1">
              <comment>The middle initial of this person.</comment>
            </column>
          </property>
          <property name="lastName" type="string">
            <column name="LAST_NAME" length="100">
              <comment>The last name of this person.</comment>
            </column>
          </property>
        </class>

    So how do I tell hbm2java to pull these comments and place them in the created Java files? I have read over this about editing the FreeMarker templates to change the way code is generated. I understand the concept, but it was not too detailed about what else you could do with it beyond their example of pre- and post-conditions.

  • Can I make a "TCP packet modifier" using tun/tap and raw sockets?

    - by benhoyt
    I have a Linux application that talks TCP, and to help with analysis and statistics, I'd like to modify the data in some of the TCP packets that it sends out. I'd prefer to do this without hacking the Linux TCP stack. The idea I have so far is to make a bridge which acts as a "TCP packet modifier". My idea is to connect to the application via a tun/tap device on one side of the bridge, and to the network card via raw sockets on the other side of the bridge. My concern is that when you open a raw socket it still sends packets up to Linux's TCP stack, and so I couldn't modify them and send them on even if I wanted to. Is this correct? A pseudo-C-code sketch of the bridge looks like:

        tap_fd = open_tap_device("/dev/net/tun");
        raw_fd = open_raw_socket();
        for (;;) {
            select(fds = [tap_fd, raw_fd]);
            if (FD_ISSET(tap_fd, &fds)) {
                read_packet(tap_fd);
                modify_packet_if_needed();
                write_packet(raw_fd);
            }
            if (FD_ISSET(raw_fd, &fds)) {
                read_packet(raw_fd);
                modify_packet_if_needed();
                write_packet(tap_fd);
            }
        }

    Does this look possible, or are there better ways of achieving the same thing? (TCP packet bridging and modification.)

  • DDD principles and ASP.NET MVC project design

    - by kaivalya
    Two-part question. I have a product aggregate that has: Prices, PackagingOptions, ProductDescriptions, ProductImages, etc. I have modeled one product repository and did not create individual repositories for any of the child classes; all DB operations are handled through the product repository. Am I understanding the DDD concept correctly so far? Sometimes the question comes to my mind whether having a repository for, let's say, packaging options could make my life easier, by directly fetching a packaging option from the DB by its ID instead of asking the product repository to find it in its PackagingOptions collection and give it to me. The second part is managing the edit/create operations using the ASP.NET MVC framework. I am currently trying to manage all add/edit/remove operations on these child collections of product through the product controller (sound right?). One challenge I am now facing: if I edit a specific packaging option of a product through mydomain/product/editpackagingoption/10, I have access to the ID of the packaging option, but I don't have the ID of the product itself, and this forces me to write a query to first find the product that has this specific packaging option, then edit that product and the relevant packaging option. I can do this because all packaging options have their own unique IDs, but this would fail for collections whose items don't have unique IDs. That feels very wrong. The next option I thought of is sending both the product and packaging option IDs in the URL, like mydomain/product/editpackagingoption/3/10, but I am not sure that is a good design either. So I am at a point where I am a bit confused, and might be having fundamental misunderstandings around all of this. I would appreciate it if you would bear with the long question and help me put this together. Thanks!
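    In DDD terms the composite URL is the more conventional choice: a packaging option is local to its aggregate, so it is addressed through the aggregate root rather than fetched independently. A hypothetical sketch of the corresponding controller action (the route, repository interface and domain types are illustrative, not from the original code):

        using System.Linq;
        using System.Web.Mvc;

        // Route: mydomain/product/editpackagingoption/{productId}/{optionId}
        public class ProductController : Controller
        {
            private readonly IProductRepository _products; // hypothetical repository abstraction

            public ProductController(IProductRepository products)
            {
                _products = products;
            }

            public ActionResult EditPackagingOption(int productId, int optionId)
            {
                // Load the aggregate root first, then navigate to the child inside it -
                // no separate PackagingOption repository, and no search across products.
                var product = _products.FindById(productId);
                var option = product.PackagingOptions.Single(o => o.Id == optionId);
                return View(option);
            }
        }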

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64-encode data to put it in a URL and then decode it within my HttpHandler. I have found that base64 encoding allows a '/' character, which will mess up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL" from Wikipedia: A modified Base64 for URL variant exists, where no padding '=' will be used, and the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer necessary and has no impact on the length of the encoded value, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general. Using .NET, I want to modify my current code from doing basic base64 encoding and decoding to using the "modified base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like:

        string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');
        // Append '=' char(s) if necessary - how best to do this?
        // My normal base64 decoding now uses base64EncodedText

    But I need to potentially add one or two '=' chars to the end, which looks a little more complex. My encoding logic should be a little simpler:

        // Perform normal base64 encoding
        byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText);
        string base64EncodedText = Convert.ToBase64String(encodedBytes);

        // Apply URL variant
        string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_');

    I have seen the Guid to Base64 for URL StackOverflow entry, but that has a known length, and therefore they can hardcode the number of equals signs needed at the end.
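    Restoring the padding only depends on the string length modulo 4, so a hedged sketch of the full decode side might look like this (a valid unpadded base64 string can never be 1 (mod 4) characters long, hence the error case):

        using System;
        using System.Text;

        static string DecodeBase64Url(string base64Url)
        {
            string s = base64Url.Replace('-', '+').Replace('_', '/');
            switch (s.Length % 4)        // restore the stripped '=' padding
            {
                case 0: break;           // already a multiple of 4
                case 2: s += "=="; break;
                case 3: s += "=";  break;
                default: throw new FormatException("Illegal base64url string.");
            }
            return Encoding.UTF8.GetString(Convert.FromBase64String(s));
        }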

  • Image gallery on the iPhone

    - by Arun Thakkar
    Hello everyone! Hope you all are fine and in your best of moods! I need your help: I need to create an image gallery that may use the concepts of scrolling and paging together. When I click a button, it opens a new view in landscape mode. This view is for my image gallery; it shows 5 images: 1) a centered large image with its description at the bottom; 2) the next image on the left side, slightly tilted at an angle, without any description at the bottom; 3) the image after that, to the left of the 2nd image; 4) the previous image on the right side, slightly tilted at an angle, without any description at the bottom; 5) the image before that, to the right of the 4th image. All images are scrollable: when I scroll the 2nd image, it moves to the center and shows its description, and the image that was centered moves to the previous-image position. Sorry for my confused English; I have also given a link below, kindly check it. Click here for the example image. I tried basic paging and scrolling code, but unluckily nothing was helpful. Kindly share your knowledge and guide me in developing such an image gallery. Looking forward. Thanks, Regards, Arun Thakkar

  • Add KO "data-bind" attribute on $(document).ready

    - by M.Babcock
    Preface: I've rarely ever been a JS developer and this is my first attempt at doing something with Knockout.js. The question to follow likely illustrates both points.
    Background: I have a fairly complex MVC3 application that I'm trying to get to work with KO (v2.0.0.0). My MVC app is designed to generically control which fields appear in the view (and how they are added to the view). It makes use of partial views to decide what to draw in the view based on the user's permissions (if the user is in group A then show control A, if the user is in group B then show control B, or possibly if the user is in group A don't include the control at all). Also, my model is very flat, so I'm not sure the built-in ability to apply my ViewModel to a specific portion of the view will help. My solution to this problem is to provide an action in my controller that responds with a JSON object containing the jQuery selector and the content to assign to the "data-bind" attribute, and to bind the ViewModel to the View in the $(document).ready event using the values provided.
    Failed proof-of-concept: My first attempt at proving that this works doesn't actually seem to work, and by "doesn't work" I mean it just doesn't bind the values at all (as can be seen in this jsfiddle). I've tried it with the applyBindings inside of the ready event and not, but it doesn't seem to make any difference.
    Question: What am I doing wrong? Or is this just not something that can work with KO (though I've seen at least one example online doing the same thing and it supposedly works)? Like I said in the preface, I've only ever pretended to be a JS developer (though I've generally gotten it to work in the past), so I'm at a loss where to start trying to figure out what I'm doing wrong. Hopefully this isn't a real noob question.

  • Data warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as: sex (true/false); demographic classification (A, B, C, etc.); place of birth; date of birth; and weight (recorded daily) - the fact that is being recorded. My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data, and generally view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like some help with the design of these tables. 'Natural' (or obvious) dimensions are the date dimension and geographical location, which have hierarchical attributes. However, I am struggling with how to model the sex and demographic classification fields. The reason I am struggling with these fields is that they have no obvious hierarchical attributes which would aid aggregation (AFAIK), which suggests they should be in a fact table; but they are mostly static or change very rarely, which suggests they should be in a dimension table. Maybe the heuristic I am using is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification, e.g. answer questions like: How do male and female weights compare across different demographic classifications? Which demographic classification (male AND female) shows the greatest increase in weight this quarter? Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.

  • Perl string internals

    - by n0rd
    How are Perl strings represented internally? What encoding is used? How do I handle different encodings properly? I've been using Perl for quite a long time, but it didn't involve a lot of string handling in different encodings, and when I encountered a minor problem that had something to do with encodings I usually resorted to some shamanic actions. Until this moment I thought of Perl strings as sequences of bytes, which fit pretty well for my tasks. Now I need to do some processing of a UTF-8 encoded file and here the trouble starts. First, I read the file into a string like this:

        open(my $in, '<', $ARGV[0]) or die "cannot open file $ARGV[0] for reading";
        binmode($in, ':utf8');
        my $contents;
        {
            local $/;
            $contents = <$in>;
        }
        close($in);

    Then I simply print it:

        print $contents;

    And I get two things: a warning, "Wide character in print at <scriptname> line <n>", and garbage in the console. So I can conclude that Perl strings have a concept of a "character", which can be "wide" or not, but when printed these "wide" characters are represented in the console as multiple bytes, not as single "characters". (I wonder now why all my previous experience with binary files worked as I expected, without any "character" issues.) Why then do I see garbage in the console? If Perl stores strings as characters in some known encoding, I don't think it would be a big problem to find out the console encoding and print the text properly. (I use Windows, BTW.) If Perl stores strings as multibyte sequences (e.g. using the same UTF-8 encoding), why is it done this way? From my C experience, handling multibyte strings is PAIN.

  • PHP: do multiple calls to the server share objects?

    - by user1513171
    I'm wondering this about PHP on Apache: do multiple calls to the server from different users - who could be sitting next to each other, in different states, different countries, etc. - share memory? For example, if I create a static variable in a PHP script and set it to 1 by default, then user1 comes in and it changes to 2, and then almost at exactly the same time user2 comes in, does he see that static variable with a value of 1 or 2? An even better example is this class I have in PHP:

        class ApplicationRegistry {
            private static $instance;
            private static $PDO;

            private function __construct() {
                self::$PDO = $db = new \PDO('mysql:unix_socket=/........');
                self::$PDO->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
            }

            static function instance() {
                if(!isset(self::$instance)) {
                    self::$instance = new self();
                }
                return self::$instance;
            }

            static function getDSN() {
                if(!isset(self::$PDO)) {
                    self::instance();
                    return self::$PDO;
                }
                return self::$PDO;
            }
        }

    So this is a singleton that has a static PDO instance. If user1 and user2 hit the server at exactly the same time, are they using different instances of PDO or the same one? This is a confusing concept for me and I'm trying to think about how my application will scale.

  • Sample Java code for approximate string matching, or Boyer-Moore extended for approximate string matching

    - by Dolphin
    Hi, I need to find 1. mismatches (incorrectly played notes), 2. insertions (additionally played notes), and 3. deletions (missed notes) in a music piece (e.g. note pitches [string values] stored in a table) against a reference music piece. This is possible either through exact string matching algorithms or through dynamic programming / approximate string matching. However, I realised that approximate string matching is more appropriate for my problem, since it identifies mismatches, insertions, and deletions of notes - or an extended version of Boyer-Moore that supports approximate matching would do. Is there any link to sample Java code I can try out for approximate string matching? I keep finding complex explanations and equations, but I hope I could do well with some sample code and simple explanations. Or can I find any sample Java code for Boyer-Moore extended to approximate string matching? I understand the Boyer-Moore concept, but am having trouble adjusting it to support approximate matching (i.e. to support mismatches, insertions, deletions). Also, what is the most efficient approximate string matching algorithm (like Boyer-Moore among exact string matching algorithms)? I greatly appreciate any insight/suggestions. Many thanks in advance.
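    The textbook starting point for counting mismatches, insertions and deletions is the Levenshtein edit-distance dynamic program. A sketch in C# (mechanical to port to Java), operating on arrays of note names rather than characters, since the notes are stored as string values:

        using System;

        static class EditDistance
        {
            // Minimum number of wrong, extra and missed notes needed to
            // turn `played` into `reference`.
            static int Levenshtein(string[] played, string[] reference)
            {
                int n = played.Length, m = reference.Length;
                int[,] d = new int[n + 1, m + 1];
                for (int i = 0; i <= n; i++) d[i, 0] = i;  // all played notes are extra
                for (int j = 0; j <= m; j++) d[0, j] = j;  // all reference notes were missed
                for (int i = 1; i <= n; i++)
                {
                    for (int j = 1; j <= m; j++)
                    {
                        int mismatch = played[i - 1] == reference[j - 1] ? 0 : 1;
                        d[i, j] = Math.Min(
                            Math.Min(d[i - 1, j] + 1,        // drop a played note (extra note)
                                     d[i, j - 1] + 1),       // add a reference note (missed note)
                            d[i - 1, j - 1] + mismatch);     // substitution (wrong note)
                    }
                }
                return d[n, m];
                // Backtracking through d recovers *which* notes were wrong/extra/missed.
            }
        }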

  • Oracle ROWNUM in DB2 - Java data archiving

    - by HonorGod
    I have a data archiving process in Java that moves data between DB2 and Sybase. FYI - this is not done through any import/export process, because there are several conditions on each table that are only available at run-time, so this process was developed in Java. Right now I have a single DatabaseReader and DatabaseWriter defined for each source and destination combination, so that data is moved in multiple threads. I wanted to expand this further so that I can have multiple DatabaseReaders and multiple DatabaseWriters defined for each source and destination combination. So, for example, if the source data is about 100 rows and I define 10 readers and 10 writers, each reader will read 10 rows and give them to a writer. I hope this will give me extreme performance, depending on the resources available on the server (CPU, memory, etc.). But I guess the problem is that these source tables do not have primary keys, and it is extremely difficult to grab rows in multiple sets. Oracle provides the ROWNUM concept, and I guess life is much simpler there... but what about DB2? How can I achieve this behavior with DB2? Is there a way to say "fetch the first 10 records, then fetch the next 10 records", and so on? Any suggestions/ideas? DB2 version - DB2 v8.1.0.144, Fix Pack 16, Linux.
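    DB2 does have an analogue: if I remember correctly, the OLAP function ROW_NUMBER() OVER() (and FETCH FIRST n ROWS ONLY) is already available at the v8 level, so each reader can grab its own window of rows - worth verifying on your exact fixpack. A hedged sketch of the idea, shown in C#/ODBC only for consistency with the other examples on this page; the SQL in the string is the part that matters and works the same from JDBC. Note that without a primary key the rows should be numbered in a single pass (or ordered by enough columns to be deterministic), because two separate unordered queries are not guaranteed to see rows in the same sequence:

        using System.Data.Odbc;

        static void ReadSlice(string dsn, int firstRow, int lastRow)
        {
            // Number the rows, then keep only this reader's window.
            const string sql =
                "SELECT * FROM (" +
                "  SELECT t.*, ROW_NUMBER() OVER () AS rn" +
                "  FROM my_source_table t" +                 // hypothetical table name
                ") AS numbered " +
                "WHERE rn BETWEEN ? AND ?";

            using (var conn = new OdbcConnection("DSN=" + dsn))
            {
                conn.Open();
                using (var cmd = new OdbcCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@first", firstRow); // ODBC binds by position
                    cmd.Parameters.AddWithValue("@last", lastRow);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // ... hand the row to a writer ...
                        }
                    }
                }
            }
        }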

  • Multithreading in C#

    - by Lalit Dhake
    Hi, I have a console application. In it, I have a process that fetches data from a database through different layers (business and data access) and stores the fetched data in respective objects: if data is fetched for a student, it is assigned to a Student object, and likewise for a school. Then a delegate calls a certain method that generates the output as required. This process will execute many times, say 10 times, and I want those executions to run simultaneously - not the first one starting, finishing, and only then the second one starting. I want the 1st, 2nd, 3rd ... 10th to all be running at once. That means multithreading. How can I achieve this? Will it give me errors while opening and closing database connections? I have tried this already, but when the 1st thread starts, the data fetched for thread 1 is stored in its respective objects (Student, School), and when the 2nd thread starts almost simultaneously, the data in the 1st thread's objects gets changed while the program is still running. What do I have to do?
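    The symptom described - thread 2 overwriting thread 1's Student/School data - usually means the threads share one set of objects (e.g. static fields or a single shared instance). A minimal C# sketch of one fix, giving each thread its own objects (the Student/School/DataAccess names are illustrative, not from the original code):

        using System.Threading;

        class Student { /* fetched fields ... */ }
        class School  { /* fetched fields ... */ }

        static class DataAccess
        {
            // Each call returns brand-new objects; each thread opens its own
            // connection internally (pooling makes this cheap).
            public static Student FetchStudent(int runId) { return new Student(); }
            public static School  FetchSchool(int runId)  { return new School();  }
        }

        class Program
        {
            static void Main()
            {
                var threads = new Thread[10];
                for (int i = 0; i < threads.Length; i++)
                {
                    int runId = i; // per-iteration copy for the closure
                    threads[i] = new Thread(() =>
                    {
                        // Locals: every thread builds its own Student/School instances,
                        // so one run can never overwrite another run's data.
                        Student student = DataAccess.FetchStudent(runId);
                        School school = DataAccess.FetchSchool(runId);
                        // ... generate the output from student/school here ...
                    });
                    threads[i].Start();
                }
                foreach (var t in threads) t.Join(); // wait for all 10 runs to finish
            }
        }

    As for the database side: opening a separate connection per thread is safe (ADO.NET connection pooling handles this); sharing a single connection object across threads is not.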

  • ASP.NET Emails blocked by Spam filter on Exchange

    - by Amadiere
    I'm trying to send an email via some C# ASP.NET code. This is being sent to our internal mail relay server, with our standard "from" address (e.g. [email protected]). In some instances this gets through OK; in others it is blocked by the spam filter. An example of our Web.config:

        <mailSettings>
          <smtp from="[email protected]">
            <network host="mailrelay.domain.com" defaultCredentials="true" />
          </smtp>
        </mailSettings>

    I've spoken with our Exchange Server team, and they inform me that on occasion our mail looks sufficiently like spam and is automatically blocked. The algorithm appears to be points-based and blocks at a score of 45. Twenty points are instantly added because our system is not sending the hostname with the domain name suffixed: the server is sending just myServerName when the receiving end hopes for myServerName.domain.com, despite the machine being part of that domain. I've been asked to look at altering the EHLO string that is sent and/or influencing the host so that it uses its fully qualified name. However, this makes little sense to me, and although I understand the concept of what I need to change, I don't know where to begin looking for the fix.
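    If the box can run .NET 4, one place to look is the clientDomain attribute on the <network> element, which (as far as I know - an assumption worth verifying against your framework version) sets the name SmtpClient announces in its HELO/EHLO command. A hedged sketch of the config change:

        <mailSettings>
          <smtp from="[email protected]">
            <!-- clientDomain overrides the machine name sent in HELO/EHLO (.NET 4+) -->
            <network host="mailrelay.domain.com"
                     defaultCredentials="true"
                     clientDomain="myServerName.domain.com" />
          </smtp>
        </mailSettings>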

  • Are PyArg_ParseTuple() "s" format specifiers useful in Python 3.x C API?

    - by Craig McQueen
    I'm trying to write a Python C extension that processes byte strings, and I have something basically working for Python 2.x and Python 3.x. For the Python 2.x code, near the start of my function, I currently have a line:

        if (!PyArg_ParseTuple(args, "s#:in_bytes", &src_ptr, &src_len))
            ...

    I notice that the s# format specifier accepts both Unicode strings and byte strings. I really just want it to accept byte strings and reject Unicode. For Python 2.x, this might be "good enough" - the standard hashlib seems to do the same, accepting Unicode as well as byte strings. However, Python 3.x is meant to clean up the Unicode/byte string mess and not let the two be interchangeable. So, I'm surprised to find that in Python 3.x, the s format specifiers for PyArg_ParseTuple() still seem to accept Unicode and provide a "default encoded string version" of the Unicode. This seems to go against the principles of Python 3.x, making the s format specifiers unusable in practice. Is my analysis correct, or am I missing something? Looking at the implementation of hashlib for Python 3.x (e.g. see md5module.c, function MD5_update() and its use of the GET_BUFFER_VIEW_OR_ERROUT() macro), I see that it avoids the s format specifiers and just takes a generic object (O specifier), then does explicit type checks using the GET_BUFFER_VIEW_OR_ERROUT() macro. Is this what we have to do?

  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system whereby some contain source-control check-in comments and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        -- Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but it could be a disaster with others, e.g. JavaScript files. So really, my query is: what was the motivation and concept behind providing this functionality in the first instance? Does anyone actually find these comments useful? Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it in PVCS, VSS and Subversion (Subversion keyword substitution); however, I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.

  • Boost Mersenne Twister: how to seed with more than one value?

    - by Eamon Nerbonne
    I'm using the boost mt19937 implementation for a simulation. The simulation needs to be reproducible, and that means storing and potentially reusing the RNG seeds later. I'm using the Windows crypto API to generate the seed values because I need an external source for the seeds, not because of any particular guarantees of randomness. The output of any simulation run will have a note including the RNG seed - so the seed needs to be reasonably short. On the other hand, as part of the analysis of the simulation, I'll be comparing several runs - but to be sure that these runs are actually different, I'll need to use different seeds - so the seed needs to be long enough to avoid accidental collisions. I've determined that 64 bits of seeding should suffice: the chance of a collision will reach 50% only after about 2^32 runs, and that probability is low enough that the average error caused by it is negligible to me. Using just 32 bits of seed is tricky; the chance of a collision reaches 50% already after 2^16 runs, and that's a little too likely for my tastes. Unfortunately, the boost implementation either seeds with a full state vector - which is far, far too long - or a single 32-bit unsigned long - which isn't ideal. How can I seed the generator with more than 32 bits but less than a full state vector? I tried just padding the vector or repeating the seeds to fill the state vector, but even a cursory glance at the results shows that that generates poor results.
