Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

Page 1179 of 1300

  • [Apache Camel] Generating multiple files based on DB query in a nice way

    - by chantingwolf
    Dear all, I have the following question. I have to generate many files based on an SQL query. For example, I get a list of today's orders from the database, generate a file for each order, and later store each file on FTP. Ideally I would like something like from(MyBean).to(Ftp), but I am not quite sure how to get there. The problem, and main question, is how to generate multiple messages from a custom bean (for example). I am not sure the Splitter EIP (http://camel.apache.org/splitter.html) is right in this case, because I don't have one message to split -- I have to generate and send many messages. I hope someone has met this problem before. If the task were to generate just one file, everything would be quite simple: you just fill the Exchange's out message (or something like that). But for multiple files I really can't see how to manage the situation. P.S. Sorry if this question is stupid; I am a novice with Camel (working with it for just a couple of weeks). It's a great tool -- actually, that's why I want to use it in the best way. Thanks a lot.

    Read the article

  • T-SQL - Left Outer Joins - Filters in the where clause versus the on clause.

    - by Greg Potter
    I am trying to compare two tables to find rows in each table that are not in the other. Table 1 has a groupby column that creates two sets of data within it:

        groupby  number
        -------  ------
        1        1
        1        2
        2        1
        2        2
        2        4

    Table 2 has only one column:

        number
        ------
        1
        3
        4

    So Table 1 has the values 1, 2, 4 in group 2 and Table 2 has the values 1, 3, 4. I expect the following results when joining for group 2.

    Table 1 LEFT OUTER JOIN Table 2:

        T1_Groupby  T1_Number  T2_Number
        ----------  ---------  ---------
        2           2          NULL

    Table 2 LEFT OUTER JOIN Table 1:

        T1_Groupby  T1_Number  T2_Number
        ----------  ---------  ---------
        NULL        NULL       3

    The only way I can get this to work is if I put the filter in the WHERE clause for the first join:

        PRINT 'Table 1 LEFT OUTER Join Table 2, with WHERE clause'
        select table1.groupby as [T1_Groupby],
               table1.number  as [T1_Number],
               table2.number  as [T2_Number]
        from table1
        LEFT OUTER join table2
        --******************************
          on table1.number = table2.number
        --******************************
        WHERE table1.groupby = 2
          AND table2.number IS NULL

    and a filter in the ON clause for the second:

        PRINT 'Table 2 LEFT OUTER Join Table 1, with ON clause'
        select table1.groupby as [T1_Groupby],
               table1.number  as [T1_Number],
               table2.number  as [T2_Number]
        from table2
        LEFT OUTER join table1
        --******************************
          on table2.number = table1.number
         AND table1.groupby = 2
        --******************************
        WHERE table1.number IS NULL

    Can anyone come up with a way of putting the filter in the WHERE clause rather than the ON clause? The context of this is a staging area in a database where I want to identify new records and deleted records. The groupby field is the equivalent of a batch id for an extract, and I am comparing the latest extract in a temp table to yesterday's batch stored in a partitioned table, which also holds all the previously extracted batches. Code to create tables 1 and 2:

        create table table1 (number int, groupby int)
        create table table2 (number int)
        insert into table1 (number, groupby) values (1, 1)
        insert into table1 (number, groupby) values (2, 1)
        insert into table1 (number, groupby) values (1, 2)
        insert into table2 (number) values (1)
        insert into table1 (number, groupby) values (2, 2)
        insert into table2 (number) values (3)
        insert into table1 (number, groupby) values (4, 2)
        insert into table2 (number) values (4)

    Read the article

  • Umlauts from a JSP page are misinterpreted

    - by Karin
    I'm getting input from a JSP page that can contain umlauts (e.g. Ä, Ö, Ü, ä, ö, ü, ß). Whenever an umlaut is entered in the input field, an incorrect value gets passed on. For example, if an "ä" (UTF-8: U+00E4) is entered in the input field, the String extracted from the argument is "ä" (UTF-8: U+00C3 and U+00A4). It seems as if the UTF-8 byte encoding (which is c3 a4 for an "ä") is being read as two separate characters. How can I retrieve the correct value? Here are snippets from the current implementation. The JSP page passes the input value "pk" on to the processing logic:

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        ...
        <input type="text" name="<%=pk.toString()%>" value="<%=value%>" size="70"/>
        <button type="submit" title="Edit" value='Save'
                onclick="action.value='doSave';pk.value='<%=pk.toString()%>'"><img src="icons/run.png"/>Save</button>

    The value gets retrieved from args and converted to a string:

        UUID pk = UUID.fromString(args.get("pk")); //$NON-NLS-1$
        String value = args.get(pk.toString());

    Note: umlauts that are saved in the database get displayed correctly on the page.

    Read the article

  • How to set up a Zend_Application with an application.ini and a user.ini

    - by Peter Smit
    I am using Zend_Application and it does not feel right that I am mixing application and user configuration in my application.ini. What I mean is the following. For example, my application needs some library classes in the namespace MyApp_, so in application.ini I put autoloaderNamespaces[] = "MyApp_". This is pure application configuration; no one except a programmer would change it. On the other hand, I also put the database configuration there, something that a sysadmin would change. My idea is to split the options between an application.ini and a user.ini, where the options in user.ini take preference (so I can define default values in application.ini). Is this a good idea? How can I best implement it? The ideas I have are: extending Zend_Application to take multiple config files; adding an init function in my Bootstrap that loads the user.ini; or parsing the config files in my index.php and passing them to Zend_Application (sounds ugly). What should I do? I would like the 'cleanest' solution, one that is prepared for the future (newer ZF versions, and other developers working on the same app).

    Read the article

  • Can per-user randomized salts be replaced with iterative hashing?

    - by Chas Emerick
    In the process of building what I'd like to hope is a properly-architected authentication mechanism, I've come across a lot of materials that specify that:

    - user passwords must be salted
    - the salt used should be sufficiently random and generated per-user
    - ...therefore, the salt must be stored with the user record in order to support verification of the user password

    I wholeheartedly agree with the first and second points, but it seems like there's an easy workaround for the latter. Instead of doing the equivalent of (pseudocode here):

        salt = random();
        hashedPassword = hash(salt . password);
        storeUserRecord(username, hashedPassword, salt);

    why not use the hash of the username as the salt? This yields a domain of salts that is well-distributed, (roughly) random, and each individual salt is as complex as your hash function provides for. Even better, you don't have to store the salt in the database -- just regenerate it at authentication time. More pseudocode:

        salt = hash(username);
        hashedPassword = hash(salt . password);
        storeUserRecord(username, hashedPassword);

    (Of course, hash in the examples above should be something reasonable, like SHA-512, or some other strong hash.) This seems reasonable to me given what (little) I know of crypto, but the fact that it's a simplification over widely-recommended practice makes me wonder whether there's some obvious reason I've gone astray that I'm not aware of.
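    For reference, a minimal C# sketch of the widely-recommended per-user random salt approach the question is comparing against; the class and method names are illustrative, not from the post:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class PasswordHasher
        {
            // Generate a random 16-byte salt, one per user.
            public static byte[] NewSalt()
            {
                var salt = new byte[16];
                new RNGCryptoServiceProvider().GetBytes(salt);
                return salt;
            }

            // SHA-512 over salt bytes followed by password bytes.
            public static byte[] HashPassword(byte[] salt, string password)
            {
                var passwordBytes = Encoding.UTF8.GetBytes(password);
                var input = new byte[salt.Length + passwordBytes.Length];
                Buffer.BlockCopy(salt, 0, input, 0, salt.Length);
                Buffer.BlockCopy(passwordBytes, 0, input, salt.Length, passwordBytes.Length);
                return SHA512.Create().ComputeHash(input);
            }
        }

        // Storage keeps (username, hashedPassword, salt); verification re-hashes the
        // supplied password with the stored salt and compares the two hashes.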

    Read the article

  • Authenticated WCF: Getting the Current Security Context

    - by bradhe
    I have the following scenario: I have various users' data stored in my database. This data was entered via a web app. We'd like to expose this data back to the users over a web service so that they can integrate their data with their applications. We would also like to expose some business logic over these services; as such we do not want to use OData. This is a multi-tenant application, so I only want to expose each user's own data back to them and not other users' data. Likewise, the business logic we expose should be relative to the authenticated user. I would like to let the user use an OASIS scheme to authenticate with the web service -- WCF already allows for this out of the box as far as I understand -- or perhaps we can issue them certificates to authenticate with. That bit hasn't really been worked out yet. Here is a bit of pseudo-code of how I envision this would work within the service:

        function GetUsersData(id)
            var user := Lookup User based on Username from Auth Context
            var data := Get Data From Repository based on "user"
            return data
        end function

    For the business logic scenario I think it would look something like this:

        function PerformBusinessLogic(someData)
            var user := Lookup User based on Username from Auth Context
            var returnValue := Perform some logic based on supplied data
            return returnValue
        end function

    The hard bit here is getting the current username (or cert info in the certificate scenario) that the user authenticated with. Does WCF even enable this scenario? If not, would WSE3 enable this? Thanks,
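    For what it's worth, a minimal sketch of reading the authenticated caller's identity inside a WCF operation via ServiceSecurityContext; the service class, OrderData type and OrderRepository are hypothetical placeholders:

        using System.ServiceModel;

        public class OrdersService
        {
            // OrderData and OrderRepository are placeholders for illustration.
            public OrderData GetUsersData(int id)
            {
                // Identity established by the configured transport/message credentials;
                // with certificate authentication the name is derived from the client cert.
                string userName = ServiceSecurityContext.Current.PrimaryIdentity.Name;

                // Scope the lookup to the authenticated (multi-tenant) user.
                return OrderRepository.GetForUser(userName, id);
            }
        }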

    Read the article

  • Membership Provider users in different tables

    - by Mike
    I have an existing database with users and administrators in different tables. I am rewriting an existing website in ASP.NET and need to decide: should I merge the two tables into one users table and have just one provider, or leave the tables separate and have two different providers? Administrators need the ability to create, edit and delete users. I am thinking that the membership/profile provider way of editing users, i.e.

        System.Web.Profile.ProfileBase pro = System.Web.Profile.ProfileBase.Create("User1");
        pro.Initialize("User1", true);
        txtEmail.Text = pro["SecondaryEmail"].ToString();

    is the best way to edit users, because the provider handles it. You cannot use this if you have two separate providers (because they are both looking at different tables), can you? Or should I write a whole lot of methods for the administrators to edit users? UPDATE: making a custom membership provider look at both tables is fine, but then what about the profile provider? The profile provider's GetPropertyValues and SetPropertyValues would be operating on the same set of properties for users and admins. Mike

    Read the article

  • Update a PDF to include an encrypted, hidden, unique identifier?

    - by Dave Jarvis
    Background. The idea is this:

    1. Person provides contact information for an online book purchase
    2. Book, as a PDF, is marked with a unique hash
    3. Person downloads the book

    PDF passwords are annoying and extremely easy to circumvent. The ideal process would be something like:

    1. Generate hash based on contact information
    2. Store contact information and hash in database
    3. Acquire book lock
    4. Update an "include" file with hash text
    5. Generate book as PDF (using pdflatex)
    6. Apply hash to book
    7. Release book lock
    8. Send email with book download link

    Technologies. The following technologies can be used (other programming languages are possible, but libraries will likely be limited to those supplied by the host): C, Java, PHP; LaTeX files; PDF files; Linux.

    Question. What programming techniques (or open source software) should I investigate to:

    - embed a unique hash (or other mark) in a PDF
    - create a collusion-attack-resistant mark
    - develop a non-fragile solution (e.g., PDF -> EPS -> PDF still contains the mark)

    Research. I have looked at the following possibilities: steganography; natural language processing (NLP); converting blank pages in the PDF to images, marking those images, and reassembling the PDF; the LaTeX watermark package; ImageMagick. Steganography requires keeping a master copy of the images, and I'm not sure the watermark would survive PDF -> EPS -> PDF or other types of conversion. LaTeX creates an image cache, so any steganographic process would have to intercept that process somehow. NLP introduces grammatical errors. Inserting blank pages as images is immediately suspect; it is easy to replace suspicious blank pages. The LaTeX watermark package draws visible marks. ImageMagick draws visible marks. What other solutions are possible?

    Related links: http://www.tcpdf.org/, invisible watermarks in images. Thank you!

    Read the article

  • sp_addlinkedserver on sql server 2005 giving problem

    - by Jit
    I am trying to create a linked server to a remote database (both servers are SQL Server 2005). I am able to connect to that remote server from my SQL Server Management Studio. I used the following syntax to create it:

        EXEC sp_addlinkedserver
            @server = N'LINKSQL2005',
            @srvproduct = N'',
            @provider = N'SQLNCLI',
            @provstr = N'SERVER=<IP address of remote server>;User ID=XXXXXX;Password=***'

    I have provided the IP address, user name and password in the above syntax. The linked server gets created, but when I try to execute a query against it I get the error below. Query used:

        select * from LINKSQL2005.<DBName>.dbo.<TableName>

    Error:

        OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Communication link failure".
        Msg 10054, Level 16, State 1, Line 0
        TCP Provider: An existing connection was forcibly closed by the remote host.
        Msg 18456, Level 14, State 1, Line 0
        Login failed for user 'sa'.
        OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Invalid connection string attribute".

    Please help me -- where am I making a mistake?

    Read the article

  • LINQ query help (C# .NET)

    - by Paul Matthews
    I'm very new to LINQ and struggling to find the answers. I have a simple SQL query:

        SELECT ID, COUNT(ID) as Selections, OptionName, SUM(Units) as Units
        FROM tbl_Results
        GROUP BY ID, OptionName

    The results I got were:

        1  4  Approved     40
        2  1  Rejected     19
        3  2  Not Decided  12

    Because I have to encrypt all my data in the database I'm unable to do the sums there, so I now bring back the data and decrypt it in the application layer. The results would be:

        1  Approved     10
        3  Not Decided   6
        2  Rejected     19
        1  Approved     15
        1  Approved      5
        3  Not Decided   6
        1  Approved     10

    Using a simple class I read back the above results and put them in a list:

        public class results
        {
            public int ID { get; set; }
            public string OptionName { get; set; }
            public int Unit { get; set; }
        }

    I almost have the LINQ query to bring back the same results as the SQL query at the top:

        var q = from r in Results
                group r.Unit by r.ID into g
                select new { ID = g.Key, Selections = g.Count(), Units = g.Sum() };

    How do I ensure my LINQ query also gives me the OptionName? Also, if I created a class called Statistics to hold my results, how would I modify the LINQ query to give me a result set of that type?

        public class Statistics
        {
            public int ID { get; set; }
            public int NumberOfSelections { get; set; }
            public string OptionName { get; set; }
            public int UnitTotal { get; set; }
        }
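    One possible shape for the query, sketched rather than taken from the post: group on both ID and OptionName and project straight into the Statistics class defined above:

        var q = from r in Results
                group r by new { r.ID, r.OptionName } into g
                select new Statistics
                {
                    ID = g.Key.ID,
                    OptionName = g.Key.OptionName,
                    NumberOfSelections = g.Count(),
                    UnitTotal = g.Sum(x => x.Unit)
                };

        var statsList = q.ToList();   // List<Statistics>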

    Read the article

  • How to change the view angle and label value of a chart .NET C#

    - by George
    Short description: I am using charts in an application where I need to change the view angle of the rendered 3D pie chart, and change the automatic labels from the pie slice names to the corresponding pie values.

    Initialization -- this is how I initialize the chart:

        Dictionary<string, decimal> secondPersonsWithValues = HistoryModel.getSecondPersonWithValues();
        decimal[] yValues = new decimal[secondPersonsWithValues.Values.Count]; // VALUES
        string[] xValues = new string[secondPersonsWithValues.Keys.Count];     // LABELS
        secondPersonsWithValues.Keys.CopyTo(xValues, 0);
        secondPersonsWithValues.Values.CopyTo(yValues, 0);

        incomeExpenseChart.Series["Default"].ChartType = System.Windows.Forms.DataVisualization.Charting.SeriesChartType.Pie;
        incomeExpenseChart.Series["Default"].Points.DataBindXY(xValues, yValues);
        incomeExpenseChart.ChartAreas["Default"].Area3DStyle.Enable3D = true;
        incomeExpenseChart.Series["Default"].CustomProperties = "PieLabelStyle=Outside";
        incomeExpenseChart.Legends["Default"].Enabled = true;
        incomeExpenseChart.ChartAreas["Default"].Area3DStyle.LightStyle = System.Windows.Forms.DataVisualization.Charting.LightStyle.Realistic;
        incomeExpenseChart.Series["Default"]["PieDrawingStyle"] = "SoftEdge";

    Basically I am querying data from the database using HistoryModel.getSecondPersonWithValues() to get pairs as Dictionary<string, decimal>, where the key is the person and the value is the amount.

    Problem #1: I need to be able to change the marked labels from person names to the amounts, or add another label with the amounts in the same colors (see the image in the original post).

    Problem #2: I need to change the view angle of the 3D pie chart. Maybe it's very simple and I just don't know the needed property, or maybe I need to override some paint event. Either way, any kind of help would be appreciated. Thanks in advance, George.
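    A short sketch of the two settings that would likely address both problems, using the same chart and series names as above; the #VAL/#VALX keywords and the Area3DStyle properties are standard members of the Microsoft Charting API, while the specific values here are illustrative:

        // Problem #1: label each slice with its value (amount) instead of its name.
        // "#VAL{N2}" is the value keyword with a two-decimal format.
        incomeExpenseChart.Series["Default"].Label = "#VAL{N2}";
        incomeExpenseChart.Series["Default"].LegendText = "#VALX";   // legend keeps the person names

        // Problem #2: tilt and rotate the 3D view (degrees; pick values to taste).
        incomeExpenseChart.ChartAreas["Default"].Area3DStyle.Inclination = 40;
        incomeExpenseChart.ChartAreas["Default"].Area3DStyle.Rotation = 30;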

    Read the article

  • DQL delete from multiple tables (doctrine)

    - by singer
    I need to perform a DQL delete from multiple related tables. In SQL it is something like this:

        DELETE r1, r2
        FROM ComRealty_objects r1, com_realty_objects_phones r2
        WHERE r1.id IN (10,20) AND r2.id_object IN (10,20)

    I need to perform this statement using DQL, but I'm stuck on this:

        <?php
        $dql = Doctrine_Query::create()
            ->delete('phones, comrealtyobjects')
            ->from('ComRealtyObjects comrealtyobjects')
            ->from('ComRealtyObjectsPhones phones')
            ->whereIn("comrealtyobjects.id", $ids)
            ->whereIn("phones.id_object", $ids);
        echo($dql->getSqlQuery());
        ?>

    But the DQL parser gives me this result:

        DELETE FROM `com_realty_objects_phones`, `ComRealty_objects` WHERE (`id` IN (?) AND `id_object` IN (?))

    Searching Google and Stack Overflow I found this (useful) topic: http://stackoverflow.com/questions/2247905/what-is-the-syntax-for-a-multi-table-delete-on-a-mysql-database-using-doctrine -- but it is not exactly my case, since there the delete was from a single table. Is there a way to override the DQL parser behaviour? Or maybe some other way to delete records from multiple tables using Doctrine? Note: if you are using Doctrine behaviours (Doctrine_Record_Generator) you first need to initialize those tables using Doctrine_Core::initializeModels() to perform DQL operations on them.

    Read the article

  • Represent multiple Null/Generic objects in an ActiveRecord association?

    - by slothbear
    I have a Casefile model that belongs_to a Doctor. In addition to all the "real" doctors, there are several generic Doctors: "self-treated", "not specified", and "removed" (the casefile used to have a real doctor, but no longer does). I suspect there will be even more generic values in the future. I started with special "doctors" in the database, generated from seed data. The generic Doctors only need to respond to the "name" and "real_doctor?" methods. This worked with one, was strained with two, and now feels completely broken. I want to change the behavior and can't figure out how to test it, which is a bad sign. Creating all the generic objects for testing is also troublesome, including the fake values needed to pass validation of the required Doctor attributes. The Null Object pattern works well for one generic object: the "name" method could check casefile.doctor.nil? and return "self-treated", as demonstrated by Craig Ambrose. What pattern should I use when there are multiple generic objects with very limited state?

    Read the article

  • Dependency Injection with Massive ORM: dynamic trouble

    - by Sergi Papaseit
    I've started working on an MVC 3 project that needs data from an enormous existing database. My first idea was to go ahead and use EF 4.1 and create a bunch of POCOs to represent the tables I need, but I'm starting to think the mapping will get overly complicated, as I only need some of the columns in some of the tables (thanks to Steven for the clarification in the comments). So I thought I'd give the Massive ORM a try. I normally use a Unit of Work implementation so I can keep everything nicely decoupled and can use dependency injection. This is part of what I have for Massive:

        public interface ISession
        {
            DynamicModel CreateTable<T>() where T : DynamicModel, new();
            dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new();
            dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new();
            // Some more methods supported by Massive here
        }

    And here's my implementation of the above interface:

        public class MassiveSession : ISession
        {
            public DynamicModel CreateTable<T>() where T : DynamicModel, new()
            {
                return new T();
            }

            public dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(where, args);
            }

            public dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(key, columns);
            }
        }

    The problem comes with the First(), Last() and FindBy() methods. Massive is based around a dynamic object called DynamicModel and doesn't define any of the above methods; it handles them through a TryInvokeMember() implementation overridden from DynamicObject instead:

        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
        {
        }

    I'm at a loss as to how to "interface" those methods in my ISession. How could my ISession provide support for First(), Last() and FindBy()? Put another way, how can I use all of Massive's capabilities and still be able to decouple my classes from data access?
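    One possible compromise, offered as a sketch rather than the poster's solution: since First(), Last() and FindBy() only exist through Massive's dynamic dispatch, the session can hand the table back typed as dynamic and let callers invoke those members through it. The Query<T> name and the usage at the bottom are illustrative assumptions:

        public interface ISession
        {
            DynamicModel CreateTable<T>() where T : DynamicModel, new();

            // Returns the table typed as dynamic so Massive's dynamically-dispatched
            // members (First, Last, FindBy...) can be reached through the session.
            dynamic Query<T>() where T : DynamicModel, new();
        }

        public class MassiveSession : ISession
        {
            public DynamicModel CreateTable<T>() where T : DynamicModel, new()
            {
                return new T();
            }

            public dynamic Query<T>() where T : DynamicModel, new()
            {
                return CreateTable<T>();
            }
        }

        // Illustrative usage (table and column names are hypothetical):
        //   dynamic products = session.Query<Products>();
        //   var match = products.First(CategoryID: 5);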

    Read the article

  • How does this code break the Law of Demeter?

    - by Dave Jarvis
    The following code breaks the Law of Demeter:

        public class Student extends Person {
            private Grades grades;

            public Student() {
            }

            /** Must never return null; throw an appropriately named exception, instead. */
            private synchronized Grades getGrades() throws GradesException {
                if( this.grades == null ) {
                    this.grades = createGrades();
                }
                return this.grades;
            }

            /** Create a new instance of grades for this student. */
            protected Grades createGrades() throws GradesException {
                // Reads the grades from the database, if needed.
                return new Grades();
            }

            /** Answers if this student was graded by a teacher with the given name. */
            public boolean isTeacher( int year, String name ) throws GradesException, TeacherException {
                // The method only knows about Teacher instances.
                return getTeacher( year ).nameEquals( name );
            }

            private Grades getGradesForYear( int year ) throws GradesException {
                // The method only knows about Grades instances.
                return getGrades().getForYear( year );
            }

            private Teacher getTeacher( int year ) throws GradesException, TeacherException {
                // This method knows about Grades and Teacher instances. A mistake?
                return getGradesForYear( year ).getTeacher();
            }
        }

        public class Teacher extends Person {
            public Teacher() {
            }

            /**
             * This method will take into consideration first name,
             * last name, middle initial, case sensitivity, and
             * eventually it could answer true to wild cards and
             * regular expressions.
             */
            public boolean nameEquals( String name ) {
                return getName().equalsIgnoreCase( name );
            }

            /** Never returns null. */
            private synchronized String getName() {
                if( this.name == null ) {
                    this.name = "";
                }
                return this.name;
            }
        }

    Questions: How is the LoD broken? Where is the code breaking the LoD? How should the code be written to uphold the LoD?

    Read the article

  • SQL 2008: Using separate tables for each datatype to return single row

    - by Thomas C
    Hi all. I thought I'd be flexible this time around and let the users decide what contact information they wish to store in their database. In theory it would look like a single row containing, for instance: name, address, zipcode, Category X, List items A. An example FieldType table defining the datatypes available to a user:

        FieldTypeID, FieldTypeName, TableName
        1, "Integer",  "tblContactInt"
        2, "String50", "tblContactStr50"
        ...

    A user then defines his fields in the FieldDefinition table:

        FieldDefinitionID, FieldTypeID, FieldDefinitionName
        11, 2, "Name"
        12, 2, "Adress"
        13, 1, "Age"

    Finally we store the actual contact data in separate tables depending on its datatype. The master table only contains the ContactID:

        tblContact: ContactID
        21
        22

        tblContactStr50: ContactStr50ID, ContactID, FieldDefinitionID, ContactStr50Value
        31, 21, 11, "Person A"
        32, 21, 12, "Adress of person A"
        33, 22, 11, "Person B"

        tblContactInt: ContactIntID, ContactID, FieldDefinitionID, ContactIntValue
        41, 22, 13, 27

    Question: is it possible to return the content of these tables in two rows like this?

        ContactID, Name,       Adress,               Age
        21,        "Person A", "Adress of person A", NULL
        22,        "Person B", NULL,                 27

    I have looked into using COALESCE and temp tables, wondering if this is at all possible. Even if it is, maybe I'm only adding complexity whilst sacrificing performance for the benefit in data storage and user-definition options. What do you think? Best Regards /Thomas C

    Read the article

  • oData/ADO.NET Data Services using LINQ-to-SQL with a decryption layer

    - by Program.X
    I have written an application using LINQ-to-SQL that submits a web form into a database. I abstract the LINQ-to-SQL away using a Repository pattern. This repository has the basic methods: Get(), Save(), etc. As the project developed, I needed to encrypt certain fields in the form. This was trivial, as I just added the encryption calls to the Get() and Save() methods in the repository. Now I want to put an OData layer over it, to allow RESTful extraction from MS Excel 2010 (when it comes out). I have this working, after a few stumbles on useless error messages, etc. However, obviously, those encrypted fields are still encrypted; my repository pattern would have decrypted these for me. As far as I know, I have to bind my OData service directly to the LINQ-to-SQL context for the schema, etc. to work -- unless I enter a whole world of pain (any URLs appreciated). Is there a way I can insert my encryption/decryption layer into the request so decryption is done "on the fly"? I looked at the OnStartProcessingRequest() overload of DataService but this doesn't seem that useful.

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting, with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is this: dump the database, tar.gz all the files into one backup named with the correct date of the backup, and upload to S3 using the Python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup) it is using about 320MB just for the backup. This causes WebFaction to kill all our processes, meaning the backup doesn't happen and our site goes down. So this is the question: is there any way to avoid loading the whole file into memory, or are there any other Python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be killed:

        filedata = open(filename, 'rb').read()
        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'
        print 'Uploading to S3...'
        response = connection.put(BUCKET_NAME, 'daily/%s' % filename,
                                  S3.S3Object(filedata),
                                  {'x-amz-acl': 'public-read', 'Content-Type': content_type})

    Read the article

  • searching for a programming platform with hot code swap

    - by Andreas
    I'm currently brainstorming over the idea of how to upgrade a program while it is running (not while debugging -- a "production" system). One thing that is required for this is to actually submit the changed source code or compiled byte code into the running process. Pseudo code:

        var method = typeof(MyClass).GetMethod("Method1");
        var content = // get it from a database (bytecode or source code)
                      // SELECT content FROM methods WHERE id=? AND version=?
        method.SetContent(content);

    At first I want the system to work without the complexity of object-orientation. That leads to the following requirements:

    - change the source code or byte code of a function
    - drop functions
    - add new functions
    - change the signature of a function

    With .NET (and others) I could inject a class via an IoC container and thus change the source code. But the loading would be cumbersome, because everything has to be in an assembly or created via Emit. Maybe with Java this would be easier? The whole ClassLoader is replaceable, I think. With JavaScript I could achieve many of the goals: simply eval a new function (MyMethod_V25) and assign it to MyClass.prototype.MyMethod. I think one can also drop functions somehow with "delete". Which general-purpose platform can handle such things?
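    As an illustration of one .NET-only workaround (my own assumption, not the poster's design): keep the replaceable functions behind delegates keyed by name and swap them while the process runs; producing the new body (Emit, CodeDom, or a scripting engine) is then a separate concern:

        using System;
        using System.Collections.Concurrent;

        public static class HotSwapRegistry
        {
            // Each "function" is looked up by name and can be replaced at runtime.
            private static readonly ConcurrentDictionary<string, Func<int, int>> Functions =
                new ConcurrentDictionary<string, Func<int, int>>();

            public static void Register(string name, Func<int, int> body)
            {
                Functions[name] = body;   // adds or replaces atomically
            }

            public static int Invoke(string name, int arg)
            {
                return Functions[name](arg);
            }
        }

        // Usage: swap "Method1" without restarting the process.
        //   HotSwapRegistry.Register("Method1", x => x + 1);
        //   HotSwapRegistry.Register("Method1", x => x * 2);   // new version takes effect immediately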

    Read the article

  • Multiple WCF calls for a single ASP.NET page load

    - by Rodney Burton
    I have an existing ASP.NET web application I am redesigning to use a service architecture. I have the beginnings of a WCF service which I am able to call and perform functions with, no problems. As far as updating data goes, it all makes sense: for example, I have a button that says Submit Order; it sends the data to the service, which does the processing. Here's my concern. I have an ASP.NET page that shows a list of orders (a View Orders page), and at the top a bunch of drop-down lists for order types and other search criteria, which are populated by querying different tables from the database (lookup tables, etc.). I am hoping to eventually completely decouple the web application from the DB, and use data contracts to pass information between the BLL, the service layer, and the web app. With that said, how can I reduce the number of WCF calls needed to load my View Orders page? I would need to make one call to get the list of orders, and one call for each drop-down list, etc., because those are populated by individual functions in my BLL. Is it good architecture to create a web service method that returns a specialized data contract consisting of everything you need to display a View Orders page, in one shot? Something like this pseudocode:

        public class ViewOrderPageDTO
        {
            public OrderDTO[] Orders { get; set; }
            public OrderTypesDTO[] OrderTypes { get; set; }
            public OrderStatusesDTO[] OrderStatuses { get; set; }
            public CustomerListDTO[] CustomerList { get; set; }
        }

    Or is it better practice, in the Page_Load event, to make 5, 6 or even 15 individual calls to the service layer to get the data needed to load the page, thereby bypassing the need for specialized WCF methods or DTOs that conglomerate other DTOs? Thanks for your input and suggestions.
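    A sketch of how the single-call option could look as a WCF contract, reusing the DTO types named in the question; the IOrderService interface, operation name and parameter are illustrative:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class ViewOrderPageDTO
        {
            [DataMember] public OrderDTO[] Orders { get; set; }
            [DataMember] public OrderTypesDTO[] OrderTypes { get; set; }
            [DataMember] public OrderStatusesDTO[] OrderStatuses { get; set; }
            [DataMember] public CustomerListDTO[] CustomerList { get; set; }
        }

        [ServiceContract]
        public interface IOrderService
        {
            // One round trip returns everything the View Orders page needs.
            [OperationContract]
            ViewOrderPageDTO GetViewOrderPageData(int customerId);
        }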

    Read the article

  • In PHP... best way to turn string representation of a folder structure into nested array

    - by Greg Frommer
    Hi everyone. I looked through the related questions for something similar but wasn't seeing quite what I need; pardon if this has already been answered. In my database I have a list of records that I want represented to the user as files inside a folder structure. So for each record I have a VARCHAR column called "FolderStructure" that identifies that record's place in the folder structure. The series of those flat FolderStructure string values creates my tree structure, with the folders separated by slashes (naturally). I didn't want to add another table just to represent a folder structure. The 'file' name is stored in a separate column, so if the FolderStructure column is empty, the file is assumed to be in the root folder. What is the best way to turn a collection of these records into a series of HTML UL/LI tags, where each LI represents a file and each folder structure is a UL embedded inside its parent? So for example:

        file  - folderStructure
        foo   -
        bar   - firstDir
        blue  - firstDir/subdir

    would produce the following HTML:

        <ul>
            <li>foo</li>
            <ul>
                <li> bar </li>
                <ul>
                    <li> blue </li>
                </ul>
            </ul>
        </ul>

    Thanks

    Read the article

  • Problem installing mysql gem on Snow Leopard: uninitialized constant MysqlCompat::MysqlRes

    - by emson
    Hi all. I've got a problem trying to install the Ruby mysql gem driver. I recently upgraded to Snow Leopard and did the Hivelogic manual install of MySQL. This all seems to work fine, as I can access mysql from the command line and make changes to the database. My problem is that if I now run rake db:migrate I get:

        rake aborted!
        uninitialized constant MysqlCompat::MysqlRes
        (See full trace by running task with --trace)

    It appears that my mysql gem isn't working correctly, since I can access MySQL fine from Python using the Python driver (which I also compiled). I therefore tried to rebuild the gem using the following command from this site: http://techliberty.blogspot.com/ (incidentally, I am using a recent Intel MacBook Pro):

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

    This compiles, although I get "No definition for ..." messages during the documentation step:

        Building native extensions. This could take a while...
        Successfully installed mysql-2.8.1
        1 gem installed
        Installing ri documentation for mysql-2.8.1...
        No definition for next_result
        No definition for field_name
        ...

    I'm a little stumped, as my mysql_config is located in the correct place (/usr/local/mysql/bin/mysql_config) and I have removed all other instances of the mysql gem from my system. Any suggestions would be greatly appreciated. Many thanks. PS: I saw this previous post http://stackoverflow.com/questions/1332207/uninitialized-constant-mysqlcompatmysqlres-using-mms2r-gem but it doesn't seem applicable to my version.

    Read the article

  • Complex derived attributes in Django models

    - by rabidpebble
    What I want to do is implement submission scoring for a site with users voting on the content, much like e.g. reddit (see the 'hot' function in http://code.reddit.com/browser/sql/functions.sql). My Submission model currently keeps track of up and down vote totals. Currently, when a user votes I create and save a related Vote object and then use F() expressions to update the Submission object's voting totals. The problem is that I want to update the score for the submission at the same time, but F() expressions are limited to simple operations (they lack support for log(), date_part(), sign(), etc.). From my limited experience with Django I can see four options here:

    1. Extend F() somehow (I haven't looked at the code yet) to support the missing SQL functions; this is my preferred option and seems to fit within the Django framework the best.
    2. Define a scoring function (much like reddit's 'hot' function) in my database, and have Django use the value of that function for the value of the score field; as far as I can tell, this is not possible.
    3. Wrap my two-step voting process in a suitably isolated transaction so that I can calculate the voting totals in Python and then update the Submission's voting totals without fear that another vote against the submission could be added or changed in the meantime; I'm hesitant to take this route because it seems overly complex -- what is a "suitably isolated transaction" in this case anyway?
    4. Use raw SQL; I would prefer to avoid this entirely -- what's the point of an ORM if I have to revert to SQL for such a common use case as this! (Note that this is coming from somebody who loves sprocs, but is using Django for ease of development.)

    Before I embark on this mission to extend F() (which I'm not sure is even possible), am I about to reinvent the wheel? Is there a more standard way to do this? It seems like such a common use case, and yet in an hour of searching I have yet to find a common solution...

    Read the article

  • Architectural conundrum

    - by Dejan
    The worst thing about working on a one-man project is the lack of input you usually get from coworkers, and because of that you tend to make obvious mistakes. After going down that road for some time, I need some help from the community. I started a little home-brew project that should turn into a portal of sorts. The main thing bothering me is the persistence layer I have concocted. It should be completely separated from the presentation layer for starters, and an OR mapper is also in there somewhere, because I have multiple data stores that have to be used. So the base idea was that the individual "repositories" each operate on their own database, the business layer then aggregates the business objects, and those are transformed in the presentation layer into view objects. The main problem I face is this: multiple classes for the same concept. There is a DAL representation of a user, a BL representation of a user, and a view representation of a user. I can handle the transformation with a tool, but is this really the right way? I mean, they are all nicely separated, but the overhead is quite something. What do you think? Am I going too deep into the separation-of-concerns rabbit hole, or is this still normal?
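    To make the overhead concrete, a minimal C# sketch of the same "user" concept at each layer with hand-written mappings (all names are illustrative, not from the post); this per-concept cost is exactly what the question is weighing:

        // DAL, BL and presentation views of the same concept.
        public class UserEntity    { public int Id { get; set; } public string Name { get; set; } public string PasswordHash { get; set; } }
        public class UserModel     { public int Id { get; set; } public string Name { get; set; } }
        public class UserViewModel { public string DisplayName { get; set; } }

        public static class UserMappings
        {
            // DAL -> BL: drop persistence-only fields.
            public static UserModel ToModel(this UserEntity e)
            {
                return new UserModel { Id = e.Id, Name = e.Name };
            }

            // BL -> presentation: keep only what the view needs.
            public static UserViewModel ToViewModel(this UserModel m)
            {
                return new UserViewModel { DisplayName = m.Name };
            }
        }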

    Read the article

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management and log aggregation on their systems. I work at a company which uses .NET for all its applications, and all systems are Windows-based. Currently each application looks after its own logging and notification of failures (e.g. if app A fails it sends out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find some options for making this work better, and I've come up with the following:

    - log4net and Chainsaw (ah, if it works)
    - logging via log4net or another framework into a central database, and rolling our own management tool
    - logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers and their apps
    - a hand-rolled solution to pull all the log files into one point and work some magic across them

    Essentially what we are after is something which can pull log entries together and allow some analytics to be run across them, plus use a kind of event-based system to, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?

    Read the article
