Search Results

Search found 26256 results on 1051 pages for 'information science'.


  • How to return output from a .NET DLL to the calling application

    - by sachin
    I have to create a VB.NET DLL for a VB.NET application. The DLL will contain a function that calculates a fee based on parameters passed to it when it is called from the application. The calculated output looks like this:

        Validations are not selected.
        Rate information:
        IN: 11/14/2011 3:12:38 PM; OUT: 11/15/2011 3:12:38 PM; Fee: 3; Description: $3 Fixed
        IN: 11/14/2011 3:12:38 PM; OUT: 11/15/2011 3:12:38 PM; Fee: 1; Description: $1 Fixed
        Sub Total: IN: 11/14/2011 3:12:38 PM; OUT: 11/15/2011 3:12:38 PM; Fee: 4; Description: Rate Group1
        Rate information:
        IN: 11/14/2011 3:12:38 PM; OUT: 11/15/2011 3:12:38 PM; Fee: 3; Description: $3 Fixed
        Sub Total: IN: 11/14/2011 3:12:38 PM; OUT: 11/15/2011 3:12:38 PM; Fee: 3; Description: Rate Group1

    Can anybody tell me how I can return output of this type to the application, so that I can use it there?
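    One possible answer, sketched under assumptions (all class and member names below are illustrative, not from the original post): instead of one pre-formatted string, the DLL can return a list of small result objects, which the caller can format or total as it likes.

        ' Hedged sketch - names and rate logic are assumptions.
        ' Auto-properties require VB 2010 or later.
        Imports System.Collections.Generic

        Public Class FeeLine
            Public Property TimeIn As DateTime
            Public Property TimeOut As DateTime
            Public Property Fee As Decimal
            Public Property Description As String
        End Class

        Public Class FeeCalculator
            Public Function CalculateFees(ByVal timeIn As DateTime,
                                          ByVal timeOut As DateTime) As List(Of FeeLine)
                Dim results As New List(Of FeeLine)
                ' ... apply each rate rule, adding one FeeLine per match ...
                Return results
            End Function
        End Class

    The calling application can then loop over the returned list to rebuild exactly the display shown above, while still being able to sum the Fee values numerically.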


  • Should we retire the term "Context"?

    - by MrGumbe
    I'm not sure there is a more abused term in the world of programming than "Context." A word that has a very clear meaning in the English language has somehow morphed into a hot mess in software development, where the connotation can be completely different depending on which library you happen to be developing in. Tomcat uses the word context to mean the configuration of a web application. Java applets, on the other hand, use an AppletContext to describe the browser and the HTML tag that launched the applet, while the BeanContext is defined as a container. ASP.NET uses the HttpContext object as a grab bag of state - containing information about the current request/response, session, user, server, and application objects. Context-Oriented Programming defines the term as "any information which is computationally accessible may form part of the context upon which behavioral variations depend," which I translate as "anything in the world." The innards of the Windows OS use the CONTEXT structure to describe the hardware environment. The .NET installation classes, however, use the InstallContext property to represent the command-line arguments passed to the installation class. The above doesn't even touch on how all of us non-framework developers have used the term. I've seen plenty of developers fall into the subconscious trap of "I can't think of anything else to call this class, so I'll name it 'WidgetContext.'" Do you all agree that before naming a class a "Context," we might want to first consider some more descriptive terms? "Environment", "Configuration", and "ExecutionState" come readily to mind.


  • Conventional Approaches for Passing Data to Back-End?

    - by Calvin
    Hi guys, I'm fairly new to web development, so please pardon the painfully newbie question that's about to follow. My computer science class group and I are developing a web application for class, which is built in Python (under Django) and uses jQuery on the front end. It's primarily an AJAX-ified application, and passing data from the back end to the front end is done through AJAX calls to specific URLs that return JSON. This is probably a stupid question, but what's the conventional approach for passing data in the opposite direction? We don't want to reload the page or anything, so is it an AJAX call going the other way, or something else? Thanks in advance for your help!
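    For the record, the conventional pattern is the mirror image: the page POSTs via XHR and a Django view reads request.POST and replies with JSON. A minimal sketch (the URL and field names are invented for illustration, and it is written for the Django 1.x of the era, where HttpResponse took a mimetype argument):

        # views.py - hedged sketch; names are assumptions
        import json
        from django.http import HttpResponse

        def save_ad(request):
            # the client side would send this with e.g.
            #   $.post('/ads/save/', {title: 'My ad'}, callback, 'json');
            if request.method == 'POST':
                title = request.POST.get('title', '')
                # ... validate and save ...
                return HttpResponse(json.dumps({'ok': True}),
                                    mimetype='application/json')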


  • How can I debug a session

    - by Organ Grinding Monkey
    I have been asked to work on a very large web application and deploy it. The problem I'm facing is that when I deploy the application and more than one user logs into the system, the sessions seem to cross over, i.e.: Person A logs in and works on the site, all good. When person B logs in, person A will then be logged in as person B as well. If anyone has experienced this behaviour before and can steer me in the right direction, that would be first prize. Second prize would be to show me how I can debug this situation so that I can find out where the problem is and fix it. Some information about the application: from what I've been told and what I've seen within the app, it started as a .NET 1.1 application and got upgraded to .NET 2.0, and that's why the log-in system was done the way it is. (The application is huge and now complete, and that's why I can't rewrite the whole user authentication process; it would just take too long and I don't know what effect it might have.) All the logged-in user information is stored in properties that have been added in the Global.asax.vb file. (Could this be the problem?) Any help here would be greatly appreciated.
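    A guess worth checking, since it produces exactly this symptom: if those Global.asax.vb properties are backed by Shared (static) fields, they exist once per application, not once per user, so the most recent login overwrites everyone's identity. A minimal sketch of the difference (member names are illustrative):

        ' Global.asax.vb - illustrative sketch of the suspected bug.
        ' A Shared field lives once per application domain, so every
        ' logged-in user reads and writes the same value:
        Public Shared CurrentUser As String   ' user B's login overwrites user A's

        ' Per-user state belongs in the session, which is keyed per browser:
        Public Property CurrentUserName() As String
            Get
                Return TryCast(HttpContext.Current.Session("CurrentUser"), String)
            End Get
            Set(ByVal value As String)
                HttpContext.Current.Session("CurrentUser") = value
            End Set
        End Property

    To debug it, log HttpContext.Current.Session.SessionID alongside the user name on each request: if two browsers show different session IDs but the same user, the state is being shared application-wide.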


  • ReflectionTypeLoadException when I try to run Enable-Migrations with Entity Framework 5.0

    - by Eric Anastas
    I'm trying to use Entity Framework for the first time on one of my projects, using the code-first workflow to automatically create my database. Initially setting up the database worked fine. Now I'm trying to migrate changes in my classes into the database. The tutorial I'm reading says I need to run "Enable-Migrations" in the Package Manager Console, yet when I do this I get the following error:

        PM> Enable-Migrations
        System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
           at System.Reflection.RuntimeModule.GetTypes(RuntimeModule module)
           at System.Reflection.RuntimeModule.GetTypes()
           at System.Reflection.Assembly.GetTypes()
           at System.Data.Entity.Migrations.Design.ToolingFacade.BaseRunner.FindType[TBase](String typeName, Func`2 filter, Func`2 noType, Func`3 multipleTypes, Func`3 noTypeWithName, Func`3 multipleTypesWithName)
           at System.Data.Entity.Migrations.Design.ToolingFacade.GetContextTypeRunner.RunCore()
           at System.Data.Entity.Migrations.Design.ToolingFacade.BaseRunner.Run()
        Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.

    What am I doing wrong? How do I retrieve the LoaderExceptions property? Also, NuGet says I have EF 5.0, but the Version property of the EntityFramework item in my project references says 4.4.0.0. I'm not sure if this is related.
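    Two notes that may help. First, the 4.4.0.0 version is expected: EF 5.0 compiled for .NET 4.0 ships as assembly version 4.4.0.0, so that part is not the problem. Second, LoaderExceptions can be read by reproducing the failing GetTypes() call in a scratch console app against the project's built DLL - a hedged sketch (the assembly path is an assumption):

        // Scratch diagnostic: dump the loader errors EF's tooling is hiding.
        using System;
        using System.Reflection;

        class Probe
        {
            static void Main()
            {
                var asm = Assembly.LoadFrom(@"bin\Debug\MyProject.dll"); // adjust path
                try
                {
                    asm.GetTypes();
                    Console.WriteLine("All types loaded fine.");
                }
                catch (ReflectionTypeLoadException ex)
                {
                    foreach (var e in ex.LoaderExceptions)
                        Console.WriteLine(e.Message); // names the type/assembly that failed
                }
            }
        }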


  • Django - partially validating form

    - by aeter
    I'm new to Django and trying to process some forms. I have this model for entering information (creating a new ad) in one template:

        class Ad(models.Model):
            ...
            category = models.CharField("Category", max_length=30, choices=CATEGORIES)
            sub_category = models.CharField("Subcategory", max_length=4, choices=SUBCATEGORIES)
            location = models.CharField("Location", max_length=30, blank=True)
            title = models.CharField("Title", max_length=50)
            ...

    I validate the form with is_valid() just fine. In another template, though, I want to use only two fields from the same form ("category" and "sub_category") for filtering information - and there is_valid() does not work correctly, because it validates the entire form while I need to validate only those two fields. I have tried the following:

        ...
        if request.method == 'POST':  # a filter for data has been submitted
            form = AdForm(request.POST)
            try:
                form = form.clean()
                category = form.category
                sub_category = form.sub_category
                latest_ads_list = Ad.objects.filter(category=category)
            except ValidationError:
                latest_ads_list = Ad.objects.all().order_by('pub_date')
        else:
            latest_ads_list = Ad.objects.all().order_by('pub_date')
            form = AdForm()
        ...

    but it doesn't work. How can I validate only the two fields category and sub_category?
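    The usual answer is to give the filter view its own small Form containing just those two fields, so is_valid() checks exactly what was submitted. A hedged sketch (the form class name is invented; CATEGORIES/SUBCATEGORIES are the same choice tuples the model uses):

        # forms.py - illustrative sketch
        from django import forms

        class AdFilterForm(forms.Form):
            category = forms.ChoiceField(choices=CATEGORIES)
            sub_category = forms.ChoiceField(choices=SUBCATEGORIES)

        # in the view:
        form = AdFilterForm(request.POST)
        if form.is_valid():
            latest_ads_list = Ad.objects.filter(
                category=form.cleaned_data['category'],
                sub_category=form.cleaned_data['sub_category'])
        else:
            latest_ads_list = Ad.objects.all().order_by('pub_date')

    Note also that form.clean() is not the API for triggering validation - is_valid() (or full_clean()) is what populates cleaned_data - which is one reason the original snippet fails.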


  • Cross-Origin Resource Sharing (CORS) - am I missing something here?

    - by David Semeria
    I was reading about CORS (https://developer.mozilla.org/en/HTTP_access_control) and I think the implementation is both simple and effective. However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine. But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information. I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access. So the expanded sequence would be:

    1) Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
    2) The page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list, and authentication proceeds as normal.
    3) The page wants to make an XHR request to malicious.com - the request is rejected locally (i.e. by the browser) because the server is not in the list.

    I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script-tag multi-site loophole. I also checked the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.


  • Need help with threads in a client/server

    - by nunos
    For college, I am developing a local relay chat. I have to program a chat server and client that will only work for sending messages between different terminal windows on the same computer, using threads and FIFOs. The FIFO part is giving me no trouble; the threads part is the one giving me headaches. The server has one thread for receiving commands from a FIFO (used by all clients) and another thread for each client that is connected. For each connected client I need to keep certain information. At first I was using global variables, which worked as long as there was only one client connected - which isn't much of a chat, chatting alone. So, ideally, I would have some data per connected client, like: nickname, name, email, etc. However, I don't know how to do that. I could create a client_data[MAX_NUMBER_OF_THREADS] array, where client_data is a struct with everything I need access to, but this would require every communication between server and client to include the id of the client in the client_data array, and that does not seem very practical. I could also instantiate a client_data immediately after creating the thread, but it would only be available in that block, and that is not very practical either. As you can see, I am in need of a little guidance here. Any comment, piece of code or link to any relevant information is greatly appreciated. Thanks.
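    A common pattern for exactly this, sketched with assumed field names and sizes: heap-allocate one struct per client and pass its pointer as the pthread argument, so each thread owns its client's data without globals or index lookups.

        /* Hedged sketch - field names and sizes are assumptions. */
        #include <pthread.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct {
            char nickname[32];
            char name[64];
            char email[64];
            int  fifo_fd;              /* this client's reply FIFO */
        } client_data_t;

        void *client_thread(void *arg)
        {
            client_data_t *cd = arg;   /* private to this thread */
            /* ... serve the client using cd->nickname etc. ... */
            free(cd);
            return NULL;
        }

        /* in the registration loop, one allocation per new client: */
        void register_client(const char *nick)
        {
            client_data_t *cd = calloc(1, sizeof *cd);
            strncpy(cd->nickname, nick, sizeof cd->nickname - 1);
            pthread_t tid;
            pthread_create(&tid, NULL, client_thread, cd);
            pthread_detach(tid);       /* or track tid for joining on shutdown */
        }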


  • Should developers *really* have private offices?

    - by Aron Rotteveel
    We will probably be moving within a year, so we have to make some decisions regarding office layout. At the moment, our company is basically one big office. When our developers can't afford to be disturbed at all, we all have our own headphones to mute the outside world. Still, it seems a lot of people feel that private offices are no doubt the way to go. From Joel's article Private Offices Redux: "Not every programmer in the world wants to work in a private office. In fact quite a few would tell you unequivocally that they prefer the camaraderie and easy information sharing of an open space. Don't fall for it. They also want M&Ms for breakfast and a pony. Open space is fun but not productive." Even though I can understand the benefit to productivity, does having a private office really result in more net productivity? There seem to be plenty of companies that create wide open spaces and still maintain good productivity. Or so it seems. (I should mention many of them use cubicles, though.) What is your opinion on this? What does your company do? Is there some middle ground? Some more related information on this matter: Private Offices Redux; The new Fog Creek office; A Field Guide to Developers; the Gmail recruitment page. I found this last one somewhat remarkable, since the Gmail recruitment page promotes the "wide open space" idea.


  • Optimization of SQL query regarding pair comparisons

    - by InfiniteSquirrel
    Hi, I'm working on a pair-comparison site where a user loads a list of films and grades from another site. My site then picks two random movies and matches them against each other; the user selects the better of the two and a new pair is loaded. This eventually yields a complete list of movies ordered by whichever is best. The database contains three tables:

    fm_film_data - all imported movies:

        fm_film_data(id int(11), imdb_id varchar(10), tmdb_id varchar(10),
                     title varchar(255), original_title varchar(255), year year(4),
                     director text, description text, poster_url varchar(255))

    fm_films - all information related to a user: which movies the user has seen, what grades the user has given, and each film's wins/losses for that user:

        fm_films(id int(11), user_id int(11), film_id int(11), grade int(11),
                 wins int(11), losses int(11))

    fm_log - a record of every duel that has occurred:

        fm_log(id int(11), user_id int(11), winner int(11), loser int(11))

    To pick a pair to show the user, I've written a MySQL query that checks the log and picks an unseen pair at random:

        SELECT pair.id1, pair.id2
        FROM (SELECT part1.id AS id1, part2.id AS id2
              FROM fm_films AS part1, fm_films AS part2
              WHERE part1.id <> part2.id
                AND part1.user_id = [!!USERID!!]
                AND part2.user_id = [!!USERID!!]) AS pair
        LEFT JOIN (SELECT winner AS id1, loser AS id2
                   FROM fm_log WHERE fm_log.user_id = [!!USERID!!]
                   UNION
                   SELECT loser AS id1, winner AS id2
                   FROM fm_log WHERE fm_log.user_id = [!!USERID!!]) AS log
               ON pair.id1 = log.id1 AND pair.id2 = log.id2
        WHERE log.id1 IS NULL
        ORDER BY RAND()
        LIMIT 1

    This query takes some time to run - about 6 seconds in our tests with two users of roughly 800 grades each. I'm looking for a way to optimize it while still ensuring each duel appears only once. The server runs MySQL 5.0.90-community.
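    One direction worth trying, sketched without access to the real data (treat it as a starting point, not a drop-in fix): replace the cross join plus UNION'd derived table with a NOT EXISTS probe, which MySQL can satisfy per row from an index on fm_log(user_id, winner, loser).

        -- Hedged sketch: assumes fm_log.winner/loser store fm_films.id values,
        -- as the original LEFT JOIN implies.
        SELECT p1.id AS id1, p2.id AS id2
        FROM fm_films AS p1
        JOIN fm_films AS p2
          ON p2.user_id = p1.user_id AND p2.id > p1.id   -- each pair considered once
        WHERE p1.user_id = [!!USERID!!]
          AND NOT EXISTS (SELECT 1 FROM fm_log AS l
                          WHERE l.user_id = p1.user_id
                            AND ((l.winner = p1.id AND l.loser = p2.id)
                              OR (l.winner = p2.id AND l.loser = p1.id)))
        ORDER BY RAND()
        LIMIT 1;

    Since the pair subquery here is unordered (p2.id > p1.id), the NOT EXISTS checks both orientations of the logged duel, preserving the "each duel only once" rule.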


  • Timeout for a function that waits indefinitely (like listen())

    - by Fantastic Fourier
    Hello, I'm not quite sure it's possible to do what I'm about to ask, so I thought I'd ask. I have a multi-threaded program where threads share a memory block to communicate necessary information. One piece of that information is the termination flag: threads constantly check this value, and when it changes they know it's time for pthread_exit(). One of the threads contains the listen() call, and it seems to wait indefinitely. This is problematic when nobody wants to make a connection and the thread needs to exit: it can't check whether it should terminate, since it's stuck on listen() and can't move beyond it. My logic is something like this:

        while (1) {
            listen();
            ...
            if (value == 1)
                pthread_exit(NULL);
        }

    What I thought would solve the problem is to allow listen() to wait for a limited duration and, if nothing happens, move on to the next statement. Unfortunately, neither of listen()'s two arguments involves a time limit. I'm not even sure I'm going about multi-threaded programming the right way; I'm not very experienced at all. So is this a good approach? Perhaps there is a better way to go about it? Thanks for any insightful comments.
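    One standard way out, sketched with the caveat that in the sockets API it is normally accept(), not listen(), that blocks: wait on the listening socket with select() and a timeout, and test the shared flag on every pass (listen_fd and value are assumed to come from the surrounding program).

        /* Hedged sketch: listen_fd is already bound and listening. */
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <pthread.h>

        void *listener_thread(void *arg)
        {
            for (;;) {
                fd_set rfds;
                struct timeval tv = { 1, 0 };   /* wake at least once a second */
                FD_ZERO(&rfds);
                FD_SET(listen_fd, &rfds);

                if (select(listen_fd + 1, &rfds, NULL, NULL, &tv) > 0) {
                    int conn = accept(listen_fd, NULL, NULL); /* won't block now */
                    /* ... hand conn off to a client thread ... */
                }
                if (value == 1)
                    pthread_exit(NULL);
            }
        }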


  • Architecture - a central location for different modules (CMS, web applications, ...) - best practices

    - by NicoJuicy
    Let's just say that I want to create a CMS plus other online applications. I want them all to integrate into a central location, but they also have to be available separately (not everyone wants more than the CMS solution). Would I create one huge central application containing the whole database, which communicates through a web service with the "standalone - integrated" modules? Or would I create them separately, with the only job of the "central" application being to sync the information (e.g. the CMS and another solution can share tables such as clients or employees)? Or do you have another idea? (I know I'm a little vague, but I can't give a lot of details because of my work contract.) If someone has all the "packages", it should be possible for the central application to integrate all the modules in one place; and if someone has more than one module, the website should combine them. What I thought best is that the central location contains only the users and their rights (e.g. cms - all rights, ...), and the information gets synced with every change (the CMS module adds a new client - it stores it locally and sends the data to the central location; the central location sends it on to the other modules, so the clients table is updated everywhere). This way, even if someone only "bought" one module, it can sync easily through the complete architecture. I hope I made myself clear!


  • Could not execute a stored procedure (using DAAB) from a client (aspx page) to a WCF service

    - by user1144695
    I am trying to store data to a SQL database from an ASP.NET client website, through a stored procedure (using the Data Access Application Block) in a WCF service hosted in an empty ASP.NET website. When I try to store data to the DB I get the following error:

        The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the <serviceDebug> configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework SDK documentation and inspect the server trace logs.

    When I debug, I get "Activation error occured while trying to get instance of type Database, key ''" on this line:

        Database db = EnterpriseLibraryContainer.Current.GetInstance<Database>("MyInstance");

    where my app.config is:

        <?xml version="1.0"?>
        <configuration>
          <configSections>
            <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true"/>
          </configSections>
          <dataConfiguration defaultDatabase="MyInstance"/>
          <connectionStrings>
            <add name="MyInstance" connectionString="Data Source=BLRKDAS307581\KD;Integrated Security=True;User ID=SAPIENT\kdas3;Password=ilove0LINUX" providerName="System.Data.SqlClient" />
          </connectionStrings>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
          </startup>
        </configuration>

    Can anyone help me with it? Thanks in advance...
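    Two hedged pointers. First, the fault message's own advice is the fastest route to the real error: enable serviceDebug in the service host's configuration (standard WCF config, shown as a minimal sketch):

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior>
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    Second, a likely culprit, though unconfirmed from the post: the file shown is an app.config, but a WCF service hosted in an ASP.NET website reads the site's web.config, so Enterprise Library may simply not be finding the dataConfiguration section or the "MyInstance" connection string at runtime - which would produce exactly this activation error. Copying those sections into the hosting site's web.config is worth trying.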


  • How can I efficiently manipulate 500k records in SQL Server 2005?

    - by cdeszaq
    I am getting a large text file of updated information from a customer that contains updates for 500,000 users. However, as I process this file, I often run into SQL Server timeout errors. Here's the process I follow in my VB application (in general):

    1. Delete all records from the temporary table, to remove last month's data (e.g. DELETE * FROM tempTable)
    2. Rip the text file into the temp table
    3. Fill in extra information in the temp table, such as each user's organization_id, user_id, group_code, etc.
    4. Update the data in the real tables based on the data computed in the temp table

    The problem is that I often run commands like

        UPDATE tempTable
        SET user_id = (SELECT user_id FROM myUsers WHERE external_id = tempTable.external_id)

    and these commands frequently time out. I have tried bumping the timeouts up to as much as 10 minutes, but they still fail. Now, I realize that 500k rows is no small number of rows to manipulate, but I would think that a database purported to handle millions and millions of rows should be able to cope with 500k pretty easily. Am I doing something wrong in how I am processing this data? Please help. Any and all suggestions welcome.
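    A hedged suggestion (table and column names are taken from the post; the indexes are assumed to be missing): the correlated subquery runs once per row, so 500k probes against an unindexed myUsers.external_id is effectively quadratic work. A set-based join plus an index on the lookup key is the usual fix:

        -- T-SQL sketch; verify column types before adding the index.
        CREATE INDEX IX_myUsers_external_id ON myUsers (external_id);

        UPDATE t
        SET    t.user_id = u.user_id
        FROM   tempTable t
        JOIN   myUsers   u ON u.external_id = t.external_id;

        -- For the monthly wipe, TRUNCATE is also much cheaper than DELETE:
        TRUNCATE TABLE tempTable;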


  • Pre Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to. I have worked on many small-scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that so should my "spec" or design document. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks, so that development runs in a fairly linear fashion. My tasks tend to look like:

    - Implement binary file reader
    - Implement binary file writer
    - Create object to encapsulate data for expression to the caller

    Now, any programmer worth his salt is aware that between those three to-do items could lie a potential wall of code expanding out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudo-code it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?


  • Managing StringBuilder Resources

    - by Jim Fell
    My C# (.NET 2.0) application has a StringBuilder variable with a capacity of 2.5 MB. Obviously, I do not want to copy such a large buffer to a larger buffer space every time it fills. By that point there is so much data in the buffer anyway that removing the older data is a viable option. Can anyone see any obvious problems with how I'm doing this (i.e. am I introducing more performance problems than I'm solving), or does it look okay?

        tText_c = new StringBuilder(2500000, 2500000);

        private void AppendToText(string text)
        {
            if (tText_c.Length * 100 / tText_c.Capacity > 95)
            {
                tText_c.Remove(0, tText_c.Length / 2);
            }
            tText_c.Append(text);
        }

    EDIT - additional information: in this application new data is received very rapidly (on the order of milliseconds) through a serial connection. I don't want to populate the multiline textbox with this new information so frequently, because that kills the performance of the application, so I'm saving it to a StringBuilder. Every so often, the application copies the contents of the StringBuilder to the textbox and wipes out the StringBuilder contents.


  • Are there any less costly alternatives to Amazon's Relational Database Services (RDS)?

    - by swapnonil
    Hi All, I have the following requirement. I have a database containing the contact and address details of at least 2000 members of my school alumni organization. We want to store all that information in a relational model so that:

    1. The data can be created and edited on demand.
    2. The data is always backed up and is simple to restore if the master copy becomes unusable.
    3. All sensitive personal information residing in this database is guaranteed to be available only to authorized users.
    4. The database won't be online for the first 6 months; it will go online only after a website is built on top of it.

    I am not a DBA and I don't want to spend time doing things like backups. I thought Amazon's RDS, with its automatic backup facility, was the perfect solution for our needs. The only problem is that, being a voluntary organization, we cannot spare the monthly $100 to $150 fee this service demands. So my question is: are there any less costly alternatives to Amazon's RDS?


  • SQL Server 2008 R2

    - by kevchadders
    Hi all, I heard on the grapevine that Microsoft will be releasing SQL Server 2008 R2 within a year. Though I initially thought this was a patch for the just-released 2008 version, I realised that it's actually a different version that you would have to pay for. (Am I correct - if you had SQL Server 2008, would you have to pay again to upgrade to 2008 R2?) If you're already running SQL Server 2008, would you say it's still worth the upgrade? Or does it depend on the size of your company and your current setup? From what I've initially read, I get the impression that this version is more useful for very high-end hardware setups where you want very good scalability. With regard to programming, are there any extra enhancements or support in there, which you're aware of, that will significantly help .NET products or web development? I found a couple of links on it (New SQL Server R2, and Microsoft's own page: Microsoft SQL 2008 R2), but I was wondering if anyone had any more info to share on the subject, as I couldn't find anything on SO about it. Thanks. EDIT - more information, based on the Express edition: one very interesting thing about SQL Server 2008 R2 concerns the Express edition. Previous Express versions of SQL Server had a database size limit of 4GB. With SQL Server Express 2008 R2, this has been increased to 10GB! This now makes the FREE Express edition a much more viable option for small and medium-sized applications that are relatively light on database requirements. Bear in mind that this limit is per database, so if you coded your application cleverly enough to use a separate database for historical/archived data, you could squeeze even more out of it! For more information, see here: http://blogs.msdn.com/sqlexpress/archive/2010/04/21/database-size-limit-increased-to-10gb-in-sql-server-2008-r2-express.aspx


  • Check a list of packages to install with apt-get

    - by Joel
    I am writing a post-install script for Ubuntu in Perl (the same script as seen here). One of the steps is to install a list of packages. The problem is that if apt-get install fails in any of many different ways for any one of the packages, the script dies badly. I would like to prevent that. This happens because of the various ways apt-get install fails for packages it doesn't like. For example, when I try to install a nonsense word (i.e. I typed the wrong package name):

        $ sudo apt-get install oblihbyvl
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package oblihbyvl

    but if instead the package name has been obsoleted (installing handbrake from a PPA):

        $ sudo apt-get install handbrake
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package handbrake is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package 'handbrake' has no installation candidate

        $ apt-cache search handbrake
        handbrake-cli - versatile DVD ripper and video transcoder - command line
        handbrake-gtk - versatile DVD ripper and video transcoder - GTK GUI

    I have tried parsing the results of apt-cache and apt-get -s install to catch all possibilities before doing the install, but I keep finding new ways for failures to reach the actual install system command. My question is: is there some facility, either in Perl (e.g. a module - though I would like to avoid installing modules if possible, as this is supposed to be the first thing run after a fresh Ubuntu install) or in apt-*/dpkg, that would let me be sure the packages are all available to be installed before installing, and if not, fail gracefully in a way that lets the user decide what to do?
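    One pre-flight check that covers both failure modes above, offered as a sketch rather than a guarantee: apt-cache policy prints "Candidate: (none)" for known-but-uninstallable packages and an error for unknown ones, so a candidate line with a real version means apt can install it.

        # Hedged sketch: true only if apt knows an installable candidate.
        sub installable {
            my ($pkg) = @_;
            my $out = `apt-cache policy \Q$pkg\E 2>/dev/null`;
            return $out =~ /^\s*Candidate:\s+(?!\(none\))\S/m;
        }

        my @missing = grep { !installable($_) } @packages;
        if (@missing) {
            warn "Not installable: @missing\n";
            # prompt the user here instead of dying mid-install
        }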


  • Reducing Dimension For SVM in Sensor Network

    - by iinception
    Hi everyone, I am looking for some suggestions on a problem that I am currently facing. I have a set of sensors, say S1-S100, which are triggered when some event E1-E20 is performed. Assume that, normally, E1 triggers S1-S20, E2 triggers S15-S30, E3 triggers S20-S50, etc., and that E1-E20 are completely independent events. Occasionally an event E might trigger some other unrelated sensor. I am using an ensemble of 20 SVMs to analyze each event separately. My features are the sensor frequencies F1-F100 (the number of times each sensor is triggered) and a few other related features. I am looking for a technique that can reduce the dimensionality of the sensor features (F1-F100), or some technique that encompasses all of the sensors and reduces the dimension too (I was looking at information-theory concepts for the last few days). I don't think averaging or maximization is a good idea, as I risk losing information (it did not give me good results). Can somebody please suggest what I am missing here? A paper or some starting idea... Thanks in advance.
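    Since the post asks for a starting point: a standard first move is to project the 100 sensor-frequency features onto their top principal components before training each SVM. A minimal sketch, assuming Python with scikit-learn and that X holds one row of F1-F100 per observed event (the component count is a tuning knob, not a recommendation):

        # Hedged sketch: X is an (n_events, 100) matrix of sensor frequencies.
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC

        pca = PCA(n_components=20)    # pick by cumulative explained variance
        X_red = pca.fit_transform(X)
        clf = SVC().fit(X_red, y)     # y: which event type occurred

        # pca.explained_variance_ratio_ shows how much signal 20 components keep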


  • Guidance required: first time working with a real high-end database (size = 50GB).

    - by claws
    I've got a project designing a database. This is going to be my first big-scale project. The good thing about it is that the information is mostly organized and currently stored in text files. The size of this information is 50GB. There are going to be a few million records in each table, and around 50 tables. I need to provide a web interface for searching and browsing. I'm going to use the MySQL DBMS. I've never worked with a database of more than 200MB before, so speed and performance were never a concern, though I did follow things like normalization and indexes. I never used any kind of testing/benchmarking/query optimization/whatever, because I never had to care about them. But here the purpose of creating the database is to make it quickly searchable, so I need to consider all possible aspects of the design. I was browsing the archives and found:

    http://stackoverflow.com/questions/1981526/what-should-every-developer-know-about-databases
    http://stackoverflow.com/questions/621884/database-development-mistakes-made-by-app-developers

    I'm going to keep the points mentioned in the above answers in mind. What else should I know? What else should I keep in mind?


  • XML Doc to JSP to TIFF

    - by SPD
    We have around 100 Word templates. Every time a user gets a business request, he/she goes into a shared folder, selects the desired template, enters the information, and saves it as a TIFF; these TIFFs are later processed by a batch program. I am trying to automate this process, so I defined an XML format that holds the template information, like:

        <Template id="1">
          <Section id="1">
            <fieldName id="1">Date</fieldName>
            <fieldValue></fieldValue>
            <fieldType></fieldType>
            <fieldProperty>textField</fieldProperty>
          </Section>
          <Section id="2">
            <fieldName id="2">Claim#</fieldName>
            <fieldValue></fieldValue>
            <fieldType></fieldType>
            <fieldProperty>textField</fieldProperty>
          </Section>
        </Template>

    Based on the template values I generate the JSP on the fly. Now I would like to generate a TIFF file out of it in a specified format. I am not sure how to handle this requirement. (Edited the original question.)


  • Indexing/Performance strategies for vast amount of the same value

    - by DrColossos
    Base information: this is in the context of the indexing process for OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with the values "W", "R", and "N" (VARCHAR(1)). The table has somewhere around ~75M rows; the rows with "W" make up ~42M of them. Existing indexes are not relevant to this question. Now the question itself: the indexing of the data is done via a procedure. Inside this procedure there are loops that do the following:

        [...]
        SELECT * FROM table WHERE the_key = "W";
        [...]

    The results get looped over again, and the above query itself is also inside a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ("W"). The script itself runs at a speed that is OK; only the SELECTs take very long. So:

    1. Do I need to create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one?
    2. Do I need to tune some of the server parameters? (They are already tuned, and the results they deliver seem good. If needed, I can post them.)
    3. Do I have to live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)?
    4. Any alternatives to the above points (except rewriting it or not using it)?
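    On point 1, one concrete candidate - assuming the database is PostgreSQL (typical for OpenStreetMap imports; MySQL 5.x has no equivalent): a partial index stores only the "W" rows, and building it on whatever column the loop additionally filters or orders by lets the planner skip the other ~33M rows entirely. A hedged sketch with placeholder names:

        -- "osm_table" and "some_col" are stand-ins for the real names.
        CREATE INDEX idx_w_rows ON osm_table (some_col) WHERE the_key = 'W';

        -- The loop's query must repeat the predicate for the planner to use it:
        SELECT * FROM osm_table WHERE the_key = 'W' AND some_col = ...;

    That said, a bare SELECT of all 42M "W" rows will be a sequential scan no matter what index exists; the bigger win is usually folding the per-row inner loop into one set-based statement, so the table is scanned once instead of once per iteration.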


  • Why does creating a CLSID_CaptureGraphBuilder2 instance always fail on one machine?

    - by Yigang Wu
    It's a really strange issue; the machine information below is from DxDiag. No error is reported, but creating a CLSID_CaptureGraphBuilder2 instance always fails on this machine. Creating CLSID_FilterGraph is okay. Before creating CLSID_CaptureGraphBuilder2, I have called CoInitialize and created CLSID_FilterGraph. Only this machine has the error. What DLL is this interface related to, or is there any function I need to call beforehand to make it work? Thanks in advance.

        System Information
        Time of this report: 4/24/2010, 09:46:58
        Machine name: TURION
        Operating System: Windows XP Home Edition (5.1, Build 2600) Service Pack 3 (2600.xpsp_sp3_qfe.100216-1510)
        Language: Japanese (Regional Setting: Japanese)
        System Manufacturer: To Be Filled By O.E.M.
        System Model: MS-7145
        BIOS: Default System BIOS
        Processor: AMD Turion(tm) 64 Mobile Technology MT-30, MMX, 3DNow, ~1.6GHz
        Memory: 768MB RAM
        Page File: 376MB used, 1401MB available
        Windows Dir: C:\WINDOWS
        DirectX Version: DirectX 9.0c (4.09.0000.0904)
        DX Setup Parameters: Not found
        DxDiag Version: 5.03.2600.5512 32bit Unicode

        DxDiag Notes
        DirectX Files Tab: No problems found.
        Display Tab 1: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Music Tab: No problems found.
        Input Tab: No problems found.
        Network Tab: No problems found.
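    A hedged diagnostic sketch: capture the exact HRESULT instead of a bare pass/fail. REGDB_E_CLASSNOTREG (0x80040154), for example, would point at a broken DirectShow registration on that machine (the capture graph builder is implemented in qcap.dll, which can be re-registered with regsvr32) rather than at the calling code.

        // Minimal probe - print the failure code from CoCreateInstance.
        #include <dshow.h>
        #include <stdio.h>
        #pragma comment(lib, "strmiids.lib")
        #pragma comment(lib, "ole32.lib")

        int main()
        {
            CoInitialize(NULL);
            ICaptureGraphBuilder2 *pBuild = NULL;
            HRESULT hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
                                          CLSCTX_INPROC_SERVER,
                                          IID_ICaptureGraphBuilder2,
                                          (void **)&pBuild);
            if (FAILED(hr))
                printf("CoCreateInstance failed: 0x%08lX\n", (unsigned long)hr);
            else
                pBuild->Release();
            CoUninitialize();
            return 0;
        }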


  • Best practices question: how to save a collection of images and a Java object in a single file?

    - by Richard
    Hi all, I am making a Java program that has a collection of flash-card-like objects. I store the objects in a JTree composed of DefaultMutableTreeNodes. Each node has a user object attached to it with a few string/native-data-type parameters. However, I also want each of these objects to have an image (typical formats: jpg, png, etc.). I would like to be able to store all of this information, including the images and the tree data, on disk in a single file, so the file can be transferred between users and the entire tree - images and per-object parameters included - can be reconstructed. I have not approached a problem like this before, so I am not sure what the best practices are. I found XMLEncoder (http://java.sun.com/j2se/1.4.2/docs/api/java/beans/XMLEncoder.html) to be a very effective way of storing my tree and the native-data-type information. However, I couldn't figure out how to save the image data itself inside the XML file, and I'm not sure it is possible, since the data is binary (so restricted characters would be invalid). My next thought was to associate a hash string instead of an image with each user object, and then compress all of the images together - with the hash strings as the names - alongside the XML-encoded tree in the same compressed file. That seemed really contrived, though. Does anyone know a good approach for this type of issue? Thanks!
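    The container idea is less contrived than it sounds - it is essentially what .jar and .odt files are. A hedged sketch using a zip archive (rather than gzip, which compresses a single stream and has no named members): one entry for the XMLEncoder output and one entry per image, keyed by its hash string. Method and entry names here are assumptions.

        // Illustrative sketch - adapt names and error handling as needed.
        import java.beans.XMLEncoder;
        import java.io.*;
        import java.util.Map;
        import java.util.zip.*;

        public class DeckWriter {
            public static void save(Object treeModel, Map<String, byte[]> images,
                                    File out) throws IOException {
                ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(out));
                try {
                    zos.putNextEntry(new ZipEntry("tree.xml"));
                    XMLEncoder enc = new XMLEncoder(new FilterOutputStream(zos) {
                        public void close() {}   // XMLEncoder.close() must not close the zip
                    });
                    enc.writeObject(treeModel);
                    enc.close();                 // flushes the XML; zip stream stays open
                    zos.closeEntry();

                    for (Map.Entry<String, byte[]> e : images.entrySet()) {
                        zos.putNextEntry(new ZipEntry("images/" + e.getKey()));
                        zos.write(e.getValue());
                        zos.closeEntry();
                    }
                } finally {
                    zos.close();
                }
            }
        }

    Reading it back is the mirror image with ZipFile/ZipInputStream and XMLDecoder, so no binary data ever has to be escaped into the XML.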

