Search Results

Search found 14644 results on 586 pages for 'auto generate'.

Page 484/586 | < Previous Page | 480 481 482 483 484 485 486 487 488 489 490 491  | Next Page >

  • JAXB - Beans to XSD or XSD to beans?

    - by bajafresh4life
    I have an existing data model. I would like to express this data model in terms of XML. It looks like I have two options if I'm to use JAXB: (1) Create an XSD that mirrors my data model, and use xjc to create binding objects. Marshalling and unmarshalling will involve creating a "mapping" class that would take my existing data objects and map them to the objects that xjc created. For example, in my data model I have a Doc class, and JAXB would create another Doc class with basically the same exact fields, and I would have to map from my Doc class to xjc's Doc class. (2) Annotate my existing data model with JAXB annotations, and use schemagen to generate an XSD from my annotated classes. I can see advantages and disadvantages of both approaches. It seems that most people using JAXB start with the XSD file. It makes sense that the XSD should be the gold standard truth, since it expresses the data model in a truly cross-platform way. I'm inclined to start with the XSD first, but it seems icky that I have to write and maintain a separate mapping class that shuttles data in between my world and JAXB world. Any recommendations?

    Read the article

  • Unit tests for deep cloning

    - by Will Dean
    Let's say I have a complex .NET class, with lots of arrays and other class object members. I need to be able to generate a deep clone of this object - so I write a Clone() method, and implement it with a simple BinaryFormatter serialize/deserialize - or perhaps I do the deep clone using some other technique which is more error prone and I'd like to make sure is tested. OK, so now (ok, I should have done it first) I'd like to write tests which cover the cloning. All the members of the class are private, and my architecture is so good (!) that I haven't needed to write hundreds of public properties or other accessors. The class isn't IComparable or IEquatable, because that's not needed by the application. My unit tests are in a separate assembly from the production code. What approaches do people take to testing that the cloned object is a good copy? Do you write (or rewrite once you discover the need for the clone) all your unit tests for the class so that they can be invoked with either a 'virgin' object or with a clone of it? How would you test if part of the cloning wasn't deep enough - as this is just the kind of problem which can give hideous-to-find bugs later?
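
    One possible approach, given here as a hedged sketch only: it assumes NUnit, assumes the class is already [Serializable] for the BinaryFormatter-based Clone(), and uses hypothetical helper names (BuildFullyPopulatedInstance, MutateSomeNestedMember). The idea is to serialize both graphs and compare the bytes, then mutate something nested in the original and check the clone's bytes no longer match; the byte comparison is a heuristic, not a guarantee, since equal graphs are normally but not provably serialized identically.

        [Test]
        public void Clone_Is_A_Deep_Copy()
        {
            var original = BuildFullyPopulatedInstance();   // hypothetical test helper
            var clone = original.Clone();

            // Structural comparison without public accessors: serialize both
            // graphs and compare the raw bytes (possible because the type is
            // already [Serializable] for the clone implementation).
            CollectionAssert.AreEqual(ToBytes(original), ToBytes(clone));

            // Deep-ness check: mutate something nested in the original...
            MutateSomeNestedMember(original);               // hypothetical helper

            // ...and the clone must be unaffected.
            CollectionAssert.AreNotEqual(ToBytes(original), ToBytes(clone));
        }

        private static byte[] ToBytes(object graph)
        {
            using (var stream = new System.IO.MemoryStream())
            {
                new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter()
                    .Serialize(stream, graph);
                return stream.ToArray();
            }
        }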

    Read the article

  • Correct escaping of delimited identifiers in SQL Server without using QUOTENAME

    - by Ross Bradbury
    Is there anything else that the code must do to sanitize identifiers (table, view, column) other than to wrap them in double quotation marks and "double up" any double quotation marks present in the identifier name? References would be appreciated. I have inherited a code base that has a custom object-relational mapping (ORM) system. SQL cannot be written in the application but the ORM must still eventually generate the SQL to send to the SQL Server. All identifiers are quoted with double quotation marks. string QuoteName(string identifier) { return "\"" + identifier.Replace("\"", "\"\"") + "\""; } If I were building this dynamic SQL in SQL, I would use the built-in SQL Server QUOTENAME function: declare @identifier nvarchar(128); set @identifier = N'Client"; DROP TABLE [dbo].Client; --'; declare @delimitedIdentifier nvarchar(258); set @delimitedIdentifier = QUOTENAME(@identifier, '"'); print @delimitedIdentifier; -- "Client""; DROP TABLE [dbo].Client; --" I have not found any definitive documentation about how to escape quoted identifiers in SQL Server. I have found Delimited Identifiers (Database Engine) and I also saw this stackoverflow question about sanitizing. If I had to call the QUOTENAME function just to quote the identifiers, that would be a lot of traffic to SQL Server that should not be needed. The ORM seems to be pretty well thought out with regards to SQL Injection. It is in C# and predates the nHibernate port and Entity Framework etc. All user input is sent using ADO.NET SqlParameter objects, it is just the identifier names that I am concerned about in this question. This needs to work on SQL Server 2005 and 2008.
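
    For what it's worth, a sketch of the helper with the doubling rule made explicit, plus a guard that mirrors QUOTENAME's documented behaviour of returning NULL for identifiers longer than 128 characters (sysname). This only illustrates the quoting rule; it does not by itself establish that nothing else is needed for sanitization.

        static string QuoteName(string identifier)
        {
            if (identifier == null)
                throw new ArgumentNullException("identifier");

            // QUOTENAME returns NULL for input longer than 128 characters;
            // failing loudly here mirrors that guard.
            if (identifier.Length > 128)
                throw new ArgumentException("Identifier exceeds 128 characters.", "identifier");

            // Delimit with double quotes and double any embedded double quotes.
            return "\"" + identifier.Replace("\"", "\"\"") + "\"";
        }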

    Read the article

  • Selecting a multi-dimensional array in LINQ

    - by mckhendry
    I have a task where I need to translate a DataTable to a two-dimensional array. That's easy enough to do by just looping over the rows and columns (see example below). private static string[,] ToArray(DataTable table) { var array = new string[table.Rows.Count,table.Columns.Count]; for (int i = 0; i < table.Rows.Count; ++i) for (int j = 0; j < table.Columns.Count; ++j) array[i, j] = table.Rows[i][j].ToString(); return array; } What I'd really like to do is use a select statement in LINQ to generate that 2D array. Unfortunately it looks like there is no way in LINQ to select a multidimensional array. Yes, I'm aware that I can use LINQ to select a jagged array, but that's not what I want. Is my assumption correct, or is there a way to use LINQ to select a multi-dimensional array?
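
    The assumption is essentially correct: the LINQ operators produce sequences (and therefore jagged shapes), so there is no built-in projection to a string[,]. A hedged sketch of the usual compromise (assuming System.Linq and System.Data are in scope): let LINQ express the row/cell projection and keep a small copy loop for the rectangular array.

        static string[,] To2DArray(DataTable table)
        {
            // LINQ can express the projection, but the result is jagged...
            string[][] jagged = table.Rows.Cast<DataRow>()
                .Select(row => row.ItemArray.Select(cell => cell.ToString()).ToArray())
                .ToArray();

            // ...so the copy into string[,] is still a plain nested loop.
            var array = new string[jagged.Length, table.Columns.Count];
            for (int i = 0; i < jagged.Length; i++)
                for (int j = 0; j < jagged[i].Length; j++)
                    array[i, j] = jagged[i][j];
            return array;
        }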

    Read the article

  • Is there a way to effect user defined data types in MySQL?

    - by Dancrumb
    I have a database which stores (among other things) the following pieces of information: Hardware IDs (BIGINT), Storage Capacities (BIGINT), Hardware Names (VARCHAR), and World Wide Port Names (VARCHAR). I'd like to be able to capture a more refined definition of these datatypes. For instance, the hardware IDs have no numerical significance, so I don't care how they are formatted when displayed. The Storage Capacities, however, are cardinal numbers and, at a user's request, I'd like to present them with thousands and decimal separators, e.g. 123,456.789. Thus, I'd like to refine BIGINT into, say, ID_NUMBER and CARDINAL. The same goes for Hardware Names, which are simple text, and WWPNs, which are hex strings, e.g. 24:68:AC:E0. Thus, I'd like to refine VARCHAR into ENGLISH_WORD and HEXSTRING. The specific datatypes I made up are just for illustrative purposes. I'd like to keep all this information in one place and I'm wondering if anybody knows of a good way to hold this all in my MySQL table definitions. I could use the Comment field of the table definition, but that smells fishy to me. One approach would be to define the data structure elsewhere and use that definition to generate my CREATE TABLEs, but that would be a major rework of the code that I currently have, so I'm looking for alternatives. Any suggestions? The application language in use is Perl, if that helps.

    Read the article

  • how to fix: ctags null expansion of name pattern "\1"

    - by bua
    Hi, as the title says, I have a problem with ctags when trying to parse a user-defined language. Basically I've followed those instructions. The quickest and easiest way to do this is by defining a new language using the program options. In order to have Swine support available every time I start ctags, I will place the following lines into the file $HOME/.ctags, which is read in every time ctags starts: --langdef=swine --langmap=swine:.swn --regex-swine=/^def[ \t]*([a-zA-Z0-9_]+)/\1/d,definition/ The first line defines the new language, the second maps a file extension to it, and the third defines a regular expression to identify a language definition and generate a tag file entry for it. I've tried different flags for the regex: b, e. My definition of a tag is: --regex-q=/^[ \t]*[^[:space:]]*[:space:]*:[:space:]*{/\l/f,function/b When I replace \1 with anything else (from the ASCII character set), it works. The output is: (--regex-q=/^[ \t]*[^[:space:]]*[:space:]*:[:space:]*{/my function name/f,function/b) !_TAG_FILE_FORMAT 2 /extended format; --format=1 will not append ;" to lines/ !_TAG_FILE_SORTED 1 /0=unsorted, 1=sorted, 2=foldcase/ !_TAG_PROGRAM_AUTHOR Darren Hiebert /[email protected]/ !_TAG_PROGRAM_NAME Exuberant Ctags // !_TAG_PROGRAM_URL http://ctags.sourceforge.net /official site/ !_TAG_PROGRAM_VERSION 5.8 // my function name file.q /^.ras.getLocation:{[u]$/;" f my function name file.q /^.a.getResource:{[u; pass]$/;" f my function name file.q /^.a.init:{$/;" f my function name file.q /^.a.kill:{[u; force]$/;" f my function name file.q /^.asdf.status:{[what; u]$/;" f my function name file.q /^.pc:{$/;" f Why doesn't \1 work? (I've tried all of 1-9.)

    Read the article

  • Credit card system implementation?

    - by Mark
    My site is going to have a credit system that basically works a lot like a credit card. Each user has an unlimited credit limit, but at the end of each week, they have to pay it off. For example, a user might make several purchases between March 1st and 7th, and then at the end of March 7th, they would be emailed an invoice that lists all their purchases during the week and a total that is due by the 14th. If they don't pay it off, their account is simply deactivated until they do. I'm just trying to wrap my head around how to implement this. I have a list of all their purchases, that's not a problem, but I'm just trying to figure out what to do with it. At the end of the 7th day, I could set up a cronjob to generate an invoice, which would basically have an id and a due date, and then I would need another many-to-many table to link all the purchases to the invoice. Then when a user adds money to their account, I guess it's applied against their current outstanding invoice? And what if they don't pay off their invoice by the time a new invoice rolls around, so now they have 2 outstanding ones, how do I know which to apply it against? Or do I make the cronjob check for any previous outstanding invoices, cancel them, and add a new item to the new invoice as "balance forward (+interest)"? How would you apply the money against an invoice? Would each payment have to be linked to an invoice, or could I just deposit it to their account credit, and then somehow figure out what's been paid and what hasn't? What if they pay in advance, before their invoice has been generated? Do I deduct it from their credit from the invoice upon generation, or at the end of the week when it's due? There are so many ways to do this... Can anyone describe what approach they would take?
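
    One common way to sidestep the "which invoice does this payment belong to" question is to record payments against the account rather than against a specific invoice, and allocate them oldest-due-first whenever the outstanding balance is computed. A hedged sketch with purely illustrative types (Invoice here is not an existing model, and System/System.Linq/System.Collections.Generic are assumed):

        class Invoice
        {
            public DateTime DueDate;
            public decimal Total;
            public decimal Paid;
        }

        // Allocate a payment across open invoices, oldest due date first.
        static decimal ApplyPayment(IEnumerable<Invoice> openInvoices, decimal amount)
        {
            foreach (var invoice in openInvoices.OrderBy(i => i.DueDate))
            {
                if (amount <= 0) break;
                decimal owed = invoice.Total - invoice.Paid;
                decimal applied = Math.Min(owed, amount);
                invoice.Paid += applied;
                amount -= applied;
            }
            // Whatever is left is unallocated credit on the account, which also
            // covers payments made before the next invoice has been generated.
            return amount;
        }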

    Read the article

  • How can I remove duplicate nodes in XQuery?

    - by Brabster
    I have an XML document I generate on the fly, and I need a function to eliminate any duplicate nodes from it. My function looks like: declare function local:start2() { let $data := local:scan_books() return <books>{$data}</books> }; Sample output is: <books> <book> <title>XML in 24 hours</title> <author>Some Guy</author> </book> <book> <title>XML in 24 hours</title> <author>Some Guy</author> </book> </books> I want just the one entry in my books root tag, and there are other tags, like say pamphlet in there too that need to have duplicates removed. Any ideas? Updated following comments. By unique nodes, I mean remove multiple occurrences of nodes that have the exact same content and structure.

    Read the article

  • Idiomatic Scala way to deal with base vs derived class field names?

    - by Gregor Scheidt
    Consider the following base and derived classes in Scala: abstract class Base( val x : String ) final class Derived( x : String ) extends Base( "Base's " + x ) { override def toString = x } Here, the identifier 'x' of the Derived class parameter overrides the field of the Base class, so invoking toString like this: println( new Derived( "string" ).toString ) returns the Derived value and gives the result "string". So a reference to the 'x' parameter prompts the compiler to automatically generate a field on Derived, which is served up in the call to toString. This is very convenient usually, but leads to a replication of the field (I'm now storing the field on both Base and Derived), which may be undesirable. To avoid this replication, I can rename the Derived class parameter from 'x' to something else, like '_x': abstract class Base( val x : String ) final class Derived( _x : String ) extends Base( "Base's " + _x ) { override def toString = x } Now a call to toString returns "Base's string", which is what I want. Unfortunately, the code now looks somewhat ugly, and using named parameters to initialize the class also becomes less elegant: new Derived( _x = "string" ) There is also a risk of forgetting to give the derived classes' initialization parameters different names and inadvertently referring to the wrong field (undesirable since the Base class might actually hold a different value). Is there a better way? Edit: To clarify, I really only want the Base values; the Derived ones just seem necessary for initializing the Base ones. The example only references them to illustrate the ensuing issues. It might be nice to have a way to suppress automatic field generation if the derived class would otherwise end up hiding a base class field.

    Read the article

  • Write a compiler for a language that looks ahead and spans multiple files?

    - by acidzombie24
    In my language I can use a class variable in my method when the definition appears below the method. It can also call methods defined below my method, etc. There are no 'headers'. Take this C# example. class A { public void callMethods() { print(); B b; b.notYetSeen(); } public void print() { Console.Write("v = {0}", v); } int v=9; } class B { public void notYetSeen() { Console.Write("notYetSeen()\n"); } } How should I compile that? What I was thinking is: pass 1: convert everything to an AST; pass 2: go through all classes and build a list of defined classes/variables/etc; pass 3: go through the code, check if there are any errors such as undefined variables or wrong use, and create my output. But it seems like for this to work I have to do passes 1 and 2 for ALL files before doing pass 3. Also it feels like a lot of work to do until I find a syntax error (other than the obvious that can be done at parse time such as forgetting to close a brace or writing 0xLETTERS instead of a hex value). My gut says there is some other way. Note: I am using bison/flex to generate my compiler.
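
    A hedged sketch (in C#, matching the example above, with placeholder types) of the pass structure this implies: parse every file up front, collect all declarations into one symbol table, and only then check bodies, so forward references and cross-file references resolve without headers. Syntax errors still surface in the parse pass; only semantic errors wait until the final pass.

        class Ast { }
        class Symbol { }

        class Compiler
        {
            public void Compile(IEnumerable<string> files)
            {
                // Pass 1: parse every file to an AST before doing anything else.
                List<Ast> asts = files.Select(Parse).ToList();

                // Pass 2: harvest declared classes/members/variables into one table.
                var symbols = new Dictionary<string, Symbol>();
                foreach (var ast in asts)
                    CollectDeclarations(ast, symbols);

                // Pass 3: every name is now known, so uses can be resolved,
                // errors reported, and output generated.
                foreach (var ast in asts)
                    CheckAndEmit(ast, symbols);
            }

            // Placeholders standing in for the bison/flex front end and later stages.
            Ast Parse(string file) { return new Ast(); }
            void CollectDeclarations(Ast ast, IDictionary<string, Symbol> symbols) { }
            void CheckAndEmit(Ast ast, IDictionary<string, Symbol> symbols) { }
        }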

    Read the article

  • MonoTouch or Titanium for rapid application development on the iPhone?

    - by Ronnie
    As a .NET developer I have always dreamed of being able to develop iPhone applications with my existing skills (C#). Both products require a Mac with the iPhone SDK installed. Appcelerator Titanium was the first one I tried; it is based on exposing some of the iPhone's native APIs to JavaScript so that they can be called from that language. MonoTouch starts at $399 to be able to deploy to the iPhone rather than just the iPhone Simulator, while Titanium is free. MonoTouch (MonoDevelop) has an IDE, which is currently missing in Titanium (but you can use any editor, like TextMate, Aptana...). I think both products ultimately generate a native precompiled app (although I am not sure about the size of the final app on the iPhone, as I think the .NET framework calls are prelinked at compilation time in MonoTouch). I am also not sure about the full coverage of all the iPhone APIs and features. Titanium also has the advantage of enabling Android app development, but as a C# developer I still find the MonoTouch experience more like the Visual Studio one. Which one would you choose, and what are your experiences with MonoTouch and Titanium?

    Read the article

  • What AOP tools exist for doing aspect-oriented programming at the assembly language level against x86?

    - by JohnnySoftware
    Looking for a tool I can use to do aspect-oriented programming at the assembly language level. For experimentation purposes, I would like the code weaver to operate on native application-level executables and dynamic link libraries. I have already done object-oriented AOP. I know assembly language for x86 and so forth. I would like to be able to do logging and other sorts of things using the familiar before/after/around constructs. I would like to be able to specify certain instructions, or sequences/patterns of consecutive instructions, as the things to place a pointcut on, since assembly/machine language is not exactly the most semantically rich computer language on the planet. If debugger and linker symbols are available, naturally, I would like to be able to use them to identify subroutines' entry points, branch/call/jump target addresses, symbolic data addresses, etc. I would like the ability to send notifications out to other diagnostic tools. Thus, support for sending data through connection-oriented sockets and datagrams is highly desirable. So is normal logging to files, UI, etc. This can be done using the action part of an aspect to make a function call, but then there are portability issues, so the tool needs to support a well-abstracted logging/notifying mechanism with a clean, simple, yet flexible interface. The goal is rapid QA. The idea is to be able to share aspect source code broadly within communities as well as publicly. So, there needs to be a declarative security policy file that users can share. This ensures that nothing untoward that is hidden directly or indirectly in an aspect source file slips by the execution manager. The policy file format needs to be simple to read, write, modify, understand, type in, edit, and generate. Sort of like Java .policy files. Think the exact opposite of anything resembling XML Schema files and you get the idea. Is there such a tool in existence already?

    Read the article

  • Question about registering a COM server and adding a reference to it in a C# project

    - by smwikipedia
    I built a COM server in raw C++; here is the procedure: (1) write an IDL file to define the interface and library. (2) use midl.exe to compile the IDL file to the necessary .h, .c, .tlb files. (3) implement the COM server in C++ and build a .dll file. (4) add the following registry entries: [HKEY_CLASSES_ROOT\RawComCarLib.ComCar.1\CurVer] @="RawComCarLib.ComCar.1" ;CLSID [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}] @="RawComCarLib.ComCar.1" [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\InprocServer32] @="D:\com\Project01.dll" [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\ProgID] @="RawComCarLib.ComCar.1" [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\TypeLib] @="{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}" ;TypeLib [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}] [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0] @="Car Server Type Lib" [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0] [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32] @="D:\com\Project01.tlb" [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\FLAGS] @="0" [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32] @="C:\Windows\System32\msdatsrc.tlb" (5) I try to add a reference to the COM server by clicking Add Reference in the C# project. (6) In the COM tab, I saw my "Car Server Type Lib", so it was OK until now. I tried to use the Object Browser to browse my COM lib, but Visual Studio said "the following components could not be browsed", and I noticed that no new reference was added to the References list of the C# project. I can use tlbimp.exe to generate an interop.CarCom.dll, and then use the COM server through this interop DLL, but I want this interop assembly to be generated automatically when I just add a reference to the COM server. Could someone tell me what's wrong? Many thanks.
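
    One way to check whether the hand-written registration is actually resolvable, independently of the Add Reference dialog, is to look the ProgID up from a small C# console app and create the object late-bound (no interop assembly needed). This is only a diagnostic sketch; if it throws, the registry entries rather than Visual Studio are the likely problem.

        using System;

        class RegistrationCheck
        {
            static void Main()
            {
                // Late-bound sanity check of the registration above.
                Type comType = Type.GetTypeFromProgID("RawComCarLib.ComCar.1", true); // true = throw on error
                object car = Activator.CreateInstance(comType);
                Console.WriteLine("Created COM object: {0}", comType.FullName);
            }
        }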

    Read the article

  • Invalidating Memcached Keys on save() in Django

    - by Zack
    I've got a view in Django that uses memcached to cache data for the more highly trafficked views that rely on a relatively static set of data. The key word is relatively: I need to invalidate the memcached key for that particular URL's data when it's changed in the database. To be as clear as possible, here's the meat an' potatoes of the view (Person is a model, cache is django.core.cache.cache): def person_detail(request, slug): if request.is_ajax(): cache_key = "%s_ABOUT_%s" % (settings.SITE_PREFIX, slug) # Check the cache to see if we've already got this result made. json_dict = cache.get(cache_key) # Was it a cache hit? if json_dict is None: # That's a negative Ghost Rider person = get_object_or_404(Person, display = True, slug = slug) json_dict = { 'name' : person.name, 'bio' : person.bio_html, 'image' : person.image.extra_thumbnails['large'].absolute_url, } cache.set(cache_key, json_dict) # json_dict will now exist, whether it's from the cache or not response = HttpResponse() response['Content-Type'] = 'text/javascript' response.write(simplejson.dumps(json_dict)) # Make sure it's all properly formatted for JS by using simplejson return response else: # This is where the fully templated response is generated What I want to do is get at that cache_key variable in its "unformatted" form, but I'm not sure how to do this--if it can be done at all. Just in case there's already something to do this, here's what I want to do with it (this is from the Person model's hypothetical save method): def save(self): # If this is an update, the key will be cached, otherwise it won't, let's see if we can't find me try: old_self = Person.objects.get(pk=self.id) cache_key = # Voodoo magic to get that variable old_key = cache_key.format(settings.SITE_PREFIX, old_self.slug) # Generate the key currently cached cache.delete(old_key) # Hit it with both barrels of rock salt # Turns out this doesn't already exist, let's make that first request even faster by making this cache right now except DoesNotExist: # I haven't gotten to this yet. super(Person, self).save() I'm thinking about making a view class for this sorta stuff, and having functions in it like remove_cache or generate_cache since I do this sorta stuff a lot. Would that be a better idea? If so, how would I call the views in the URLconf if they're in a class?

    Read the article

  • Crystal Reports - export to pdf in MVC

    - by BhejaFry
    Hi folks, I have integrated the code below into my application to generate a PDF file using Crystal Reports in an MVC project. However, after the request is processed, I see only 2 pages in the PDF file while my 'data' returns more than 2 records. Also, the PDF isn't rendered as soon as the page is processed; instead I have to refresh at least once, and then the PDF is rendered in the browser. using CrystalDecisions.CrystalReports.Engine; public FileStreamResult Report() { ReportClass rptH = new ReportClass(); List<sampledataset> data = objdb.getdataset(); rptH.FileName = Server.MapPath("[reportName].rpt"); rptH.Load(); rptH.SetDatabaseLogon("un", "pwd", "server", "db"); rptH.SetDataSource(data); Stream stream = rptH.ExportToStream(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat); stream.Seek(0, System.IO.SeekOrigin.Begin); return new FileStreamResult(stream, "application/pdf"); } I took the code from here on SO but modified it as above. TIA.

    Read the article

  • Throttling outbound API calls generated by a Rails app

    - by Sharpie
    I am not a professional web developer, but I like to wrench on websites as a hobby. Recently, I have been playing with developing a Rails app as a project to help me learn the framework. The goal of my toy app is to harvest data from another service through their API and make it available for me to query using a search function. However, the service I want to pull data from imposes a rate limit on the number of API calls that may be executed per minute. I plan on having my app run a daily update which may generate a burst of API calls that far exceeds the limit provided by the external service. I wish to respect the performance of the external site and so would like to throttle the rate at which my app executes the calls. I have done a little bit of searching and the overwhelming amount of tutorial material and pre-built libraries I have found cover throttling inbound API calls to a web app and I can find little discussion of controlling the flow of outbound calls. Being both an amateur web developer and a rails newbie, it is entirely possible that I have been executing the wrong searches in the wrong places. Therefore my questions are: Is there a nice website out there aggregating Rails tutorials that has material related to throttling outbound API requests? Are there any ruby gems or other libraries that would help me throttle the requests? I have some ideas of how I might go about writing a throttling system using a queue-based worker like DelayedJob or Resque to manage the API calls, but I would rather spend my weekends building the rest of the site if there is a good pre-built solution out there already.

    Read the article

  • Emulating a web browser

    - by Sean
    Hello, we are tasked with basically emulating a browser to fetch webpages, looking to automate tests on different web pages. This will be used for (ideally) console-ish applications that run in the background and generate reports. We tried going with .NET and the WatiN library, but it was built on a Marshalled IE, and so it lacked many features that we hacked in with calls to unmanaged native code, but at the end of the day IE is not thread safe nor process safe, and many of the needed features could only be implemented by changing registry values and it was just terribly unflexible. Proxy support JavaScript support- we have to be able to parse the actual DOM after any javascript has executed (and hopefully an event is raised to handle any ajax calls) Ability to save entire contents of page including images FROM THE loaded page's CACHE to a separate location ability to clear cookies/cache, get the cookies/cache, etc. Ability to set headers and alter post data for any browser call And for the love of drogs, an API that isn't completely cryptic Languages acceptable C++, C#, Python, anything that can be a simple little console application that doesn't have a retarded syntax like Ruby. From my own research, and believe me I am terrible at google searches, I have heard good things about WebKit... would the Qt module QtWebKit handle all these features?
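
    This does not answer the JavaScript/DOM requirement (that realistically needs an embedded engine such as WebKit), but for the plain HTTP side of the list (proxy, cookies, custom headers, altered POST data) a hedged .NET sketch using HttpWebRequest looks roughly like this; the proxy address and form fields are made up for illustration, and the usual System.Net, System.Text and System.IO namespaces are assumed.

        // Covers proxy, cookies, headers and POST data only -- no script is executed.
        static string FetchPage(CookieContainer cookies)
        {
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/login");
            request.Proxy = new WebProxy("proxy.local", 8080);        // hypothetical proxy
            request.CookieContainer = cookies;                        // reused across requests
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.UserAgent = "Mozilla/5.0 (test harness)";

            byte[] body = Encoding.UTF8.GetBytes("user=a&pass=b");    // altered POST data
            request.ContentLength = body.Length;
            using (var requestStream = request.GetRequestStream())
                requestStream.Write(body, 0, body.Length);

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                return reader.ReadToEnd();                            // raw markup; no JavaScript has run
        }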

    Read the article

  • How to sort the file names in bash in this circumstance?

    - by Nicolas
    I have run a program to generate some results with different parameters (i.e. R, C and RP). These results are saved in files named results.txt. Then I need to parse these experimental results for analysis. In params_R_7_C_16_RP_0, the 7 is the value of the parameter R, the 16 is the value of the parameter C and the 0 is the value of the parameter RP. Now, I want to get these results.txt files in the current directory to parse, and sort the paths by the parameter values of R, C and RP. I first use the following command to get the results.txt files that I want to parse: find ./ -name "results.txt" and the output is: ./params_R_11_C_9_RP_0/results.txt ./params_R_7_C_9_RP_0/results.txt ./params_R_7_C_4_RP_0/results.txt ./params_R_11_C_16_RP_0/results.txt ./params_R_9_C_4_RP_0/results.txt ./params_R_5_C_9_RP_0/results.txt ./params_R_9_C_25_RP_0/results.txt ./params_R_7_C_16_RP_0/results.txt ./params_R_5_C_25_RP_0/results.txt ./params_R_5_C_16_RP_0/results.txt ./params_R_11_C_4_RP_0/results.txt ./params_R_9_C_16_RP_0/results.txt ./params_R_7_C_25_RP_0/results.txt ./params_R_15_C_4_RP_0/results.txt ./params_R_5_C_4_RP_0/results.txt ./params_R_9_C_9_RP_0/results.txt and I change the command as follows: find ./ -name "results.txt" | sort and the output is: ./params_R_11_C_16_RP_0/results.txt ./params_R_11_C_25_RP_0/results.txt ./params_R_11_C_4_RP_0/results.txt ./params_R_11_C_9_RP_0/results.txt ./params_R_5_C_16_RP_0/results.txt ./params_R_5_C_25_RP_0/results.txt ./params_R_5_C_4_RP_0/results.txt ./params_R_5_C_9_RP_0/results.txt ./params_R_7_C_16_RP_0/results.txt ./params_R_7_C_25_RP_0/results.txt ./params_R_7_C_4_RP_0/results.txt ./params_R_7_C_9_RP_0/results.txt ./params_R_9_C_16_RP_0/results.txt ./params_R_9_C_25_RP_0/results.txt ./params_R_9_C_4_RP_0/results.txt ./params_R_9_C_9_RP_0/results.txt But I want the output to be as follows: ./params_R_5_C_4_RP_0/results.txt ./params_R_5_C_9_RP_0/results.txt ./params_R_5_C_16_RP_0/results.txt ./params_R_5_C_25_RP_0/results.txt ./params_R_7_C_4_RP_0/results.txt ./params_R_7_C_9_RP_0/results.txt ./params_R_7_C_16_RP_0/results.txt ./params_R_7_C_25_RP_0/results.txt ./params_R_9_C_4_RP_0/results.txt ./params_R_9_C_9_RP_0/results.txt ./params_R_9_C_16_RP_0/results.txt ./params_R_9_C_25_RP_0/results.txt ... I could have named the directories params_R_005_C_004_RP_0 when generating the results, but it would take too much time to rerun the program to regenerate them. So I wonder if there is any way to use bash commands to achieve this.

    Read the article

  • Making .NET assembly COM-visible and working for VB5

    - by Cyberherbalist
    I have an assembly which I have managed to make visible to VB6 and it works, but having a problem accomplishing the same thing with VB5. For VB6, I have built the assembly, made it COM-visible, registered it as a COM object etc., and the assembly shows in VB6's References list, and allows me to use it successfully. The Object Browser also shows the method in the assy. I copied the assembly and its TLB to a virtual workstation used for VB5 development, and ran Regasm, apparently successfully: C:\>C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727 \regasm arserviceinterface.dll /tlb:arserviceinterface.tlb Microsoft (R) .NET Framework Assembly Registration Utility 2.0.50727.3053 Copyright (C) Microsoft Corporation 1998-2004. All rights reserved. Assembly exported to 'C:\Projects\AR\3rd Party\ARService\arserviceinterface.tlb' , and the type library was registered successfully Note that the virtual W/S is Win2k and does not have .NET Fx 3.5 on it, just 2.0. The assembly shows up in the References that can be selected in VB5, but the method of the assembly doesn't show up in the Object Browser, and it is generally unusable. Either there is a step to do that I haven't done, or VB5 doesn't know how to use such a COM object. Note that the VB5 setup is on a virtual workstation, not the same workstation that VB6 is installed on. Any ideas? One thing that occurred to me is that I might need to generate and use a strong name on the workstation in question, but...

    Read the article

  • Excel Reader ASP.NET

    - by user304429
    I declared a DataGrid in an ASP.NET View and I'd like to generate some C# code to populate said DataGrid with an Excel spreadsheet (.xlsx). Here's the code I have: <asp:DataGrid id="DataGrid1" runat="server"/> <script language="C#" runat="server"> protected void Page_Load(object sender, EventArgs e) { string connString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\FileName.xlsx;Extended Properties=""Excel 12.0;HDR=YES;"""; // Create the connection object OleDbConnection oledbConn = new OleDbConnection(connString); try { // Open connection oledbConn.Open(); // Create OleDbCommand object and select data from worksheet Sheet1 OleDbCommand cmd = new OleDbCommand("SELECT * FROM [sheetname$]", oledbConn); // Create new OleDbDataAdapter OleDbDataAdapter oleda = new OleDbDataAdapter(); oleda.SelectCommand = cmd; // Create a DataSet which will hold the data extracted from the worksheet. DataSet ds = new DataSet(); // Fill the DataSet from the data extracted from the worksheet. oleda.Fill(ds, "Something"); // Bind the data to the GridView DataGrid1.DataSource = ds.Tables[0].DefaultView; DataGrid1.DataBind(); } catch { } finally { // Close connection oledbConn.Close(); } } </script> When I run the website, nothing really happens. What gives?
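
    One thing worth checking, offered as an assumption rather than a diagnosis: the Jet 4.0 provider does not read the Open XML (.xlsx) format, so for an .xlsx file the ACE provider ("Microsoft.ACE.OLEDB.12.0", which must be installed on the server) is normally used instead, and the empty catch block above hides whatever exception results. A hedged sketch of the two changes:

        // ACE instead of Jet for .xlsx; "Excel 12.0 Xml" identifies the Open XML format.
        string connString =
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=c:\FileName.xlsx;" +
            @"Extended Properties=""Excel 12.0 Xml;HDR=YES;""";

        try
        {
            // ... same OleDbConnection / OleDbDataAdapter / DataBind code as above ...
        }
        catch (Exception ex)
        {
            // Surfacing the exception explains "nothing really happens".
            System.Diagnostics.Trace.WriteLine(ex);
            throw;
        }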

    Read the article

  • Create variable names using a loop in Java?

    - by SeerUK
    Hi, first time poster, long time reader, so be gentle with me :) See the following code, which generates timestamps for me for the beginning and end of every month in a financial year. int year = 2010; // Financial year runs from Sept-Aug so earlyMonths are those where year = FY-1 and lateMonths are those where year = FY int[] earlyMonths = {8, 9, 10, 11}; // Sept to Dec int earlyYear = year -1; for (int i : earlyMonths) { month = i; Calendar cal = Calendar.getInstance(); cal.clear(); cal.set(earlyYear,month,1,0,0,0); Long start = cal.getTimeInMillis(); cal.clear(); cal.set(earlyYear,month,1); lastDayofMonth = cal.getActualMaximum(GregorianCalendar.DAY_OF_MONTH); cal.set(earlyYear,month,lastDayofMonth,23,59,59); Long end = cal.getTimeInMillis(); } int[] lateMonths = {0, 1, 2, 3, 4, 5, 6, 7}; // Jan to Aug for (int i : lateMonths) { month = i; Calendar cal = Calendar.getInstance(); cal.clear(); cal.set(year,month,1,0,0,0); Long start = cal.getTimeInMillis(); cal.clear(); cal.set(year,month,1); lastDayofMonth = cal.getActualMaximum(GregorianCalendar.DAY_OF_MONTH); cal.set(year,month,lastDayofMonth,23,59,59); Long end = cal.getTimeInMillis(); } So far so good, but in order to use these results I need these timestamps to be output to variables named by month (to be used in a prepared statement later in the code), e.g. SeptStart = some timestamp, SeptEnd = some timestamp, etc. I don't know if it is possible to declare new variables based on the results of each loop. Any ideas?

    Read the article

  • Convert function to read from string instead of file in C

    - by Dusty
    I've been tasked with updating a function which currently reads in a configuration file from disk and populates a structure: static int LoadFromFile(FILE *Stream, ConfigStructure *cs) { int tempInt; ... if ( fscanf( Stream, "Version: %d\n",&tempInt) != 1 ) { printf("Unable to read version number\n"); return 0; } cs->Version = tempInt; ... } to one which allows us to bypass writing the configuration to disk and instead pass it directly in memory, roughly equivalent to this: static int LoadFromString(char *Stream, ConfigStructure *cs) A few things to note: The current LoadFromFile function is incredibly dense and complex, reading dozens of versions of the config file in a backward compatible manner, which makes duplication of the overall logic quite a pain. The functions that generate the config file and those that read it originate in totally different parts of the old system and therefore don't share any data structures so I can't pass those directly. I could potentially write a wrapper, but again, it would need to handle any structure passed in in a backwards compatible manner. I'm tempted to just pass the file as is in as a string (as in the prototype above) and convert all the fscanf's to sscanf's but then I have to handle incrementing the pointer along (and potentially dealing with buffer overrun errors) manually. This has to remain in C, so no C++ functionality like streams can help here Am I missing a better option? Is there some way to create a FILE * that actually just points to a location in memory instead of on disk? Any pointers, suggestions or other help is greatly appreciated.

    Read the article

  • PLKs and Web Service Software Factory

    - by Nix
    We found a bug in Web Service Software Factory; a description can be found here. There have been no updates on it, so we decided to download the code and fix it ourselves. It is a very simple bug and we patched it with maybe 3 lines of code. However, we have now tried to repackage it and use it, and are finding that this is seemingly an impossible process. Can someone please explain to me the process of PLKs? I have read all about them but still don't understand what is really required to distribute a VS package. I was able to get it to load and run using a PLK obtained from here, but I am assuming that you have to be a partner to get a functional PLK that will be recognized on other people's systems? Every time I try to install this on a different computer I get a "Package Load Failure". Is the reason I am getting errors because I am not using a partner key? Is there any other way around this? For instance, is there any way we can have an "internal" VS package that we can distribute? Edit: Files I had to change to get it to work. First run devenv PostInstall.proj Generate your PLKs and replace ##Package PLK## (.resx files) --Just note that the package version is not the class name but is "Web Service Software Factory: Modeling Edition" -- And you need to remove the new lines from the key ProductDefinitionRegistryFragment.wxi line 1252 (update version to whatever version you used in the PLK) Uncomment all // [VSShell::ProvideLoadKey("Standard", Constant in .tt files.

    Read the article

  • change custom mapping - sharp architecture/ fluent nhibernate

    - by csetzkorn
    I am using the sharp architecture which also deploys FNH. The db schema sql code is generated during the testing like this: [TestFixture] [Category("DB Tests")] public class MappingIntegrationTests { [SetUp] public virtual void SetUp() { string[] mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies(); configuration = NHibernateSession.Init( new SimpleSessionStorage(), mappingAssemblies, new AutoPersistenceModelGenerator().Generate(), "../../../../app/XXX.Web/NHibernate.config"); } [TearDown] public virtual void TearDown() { NHibernateSession.CloseAllSessions(); NHibernateSession.Reset(); } [Test] public void CanConfirmDatabaseMatchesMappings() { var allClassMetadata = NHibernateSession.GetDefaultSessionFactory().GetAllClassMetadata(); foreach (var entry in allClassMetadata) { NHibernateSession.Current.CreateCriteria(entry.Value.GetMappedClass(EntityMode.Poco)) .SetMaxResults(0).List(); } } /// <summary> /// Generates and outputs the database schema SQL to the console /// </summary> [Test] public void CanGenerateDatabaseSchema() { System.IO.TextWriter writeFile = new StreamWriter(@"d:/XXXSqlCreate.sql"); var session = NHibernateSession.GetDefaultSessionFactory().OpenSession(); new SchemaExport(configuration).Execute(true, false, false, session.Connection, writeFile); } private Configuration configuration; } I am trying to use: using FluentNHibernate.Automapping; using xxx.Core; using SharpArch.Data.NHibernate.FluentNHibernate; using FluentNHibernate.Automapping.Alterations; namespace xxx.Data.NHibernateMaps { public class x : IAutoMappingOverride<x> { public void Override(AutoMapping<Tx> mapping) { mapping.Map(x => x.text, "text").CustomSqlType("varchar(max)"); mapping.Map(x => x.url, "url").CustomSqlType("varchar(max)"); } } } To change the standard mapping of strings from NVARCHAR(255) to varchar(max). This is not picked up during the sql schema generation. I also tried: mapping.Map(x = x.text, "text").Length(100000); Any ideas? Thanks. Christian
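
    Two things may be worth checking, both offered as assumptions: first, IAutoMappingOverride classes are only applied if the AutoPersistenceModel is told about them (e.g. via UseOverridesFromAssemblyOf<...>() in the AutoPersistenceModelGenerator); second, if the real goal is "every string property should map to a wide column" rather than two specific properties, a Fluent NHibernate convention is usually a better fit than per-class overrides. A hedged sketch of such a convention, registered through the model's Conventions collection:

        using FluentNHibernate.Conventions;
        using FluentNHibernate.Conventions.Instances;

        // Applied to every auto-mapped string property once added via
        // something like .Conventions.Add<LongStringConvention>() in the
        // AutoPersistenceModelGenerator.
        public class LongStringConvention : IPropertyConvention
        {
            public void Apply(IPropertyInstance instance)
            {
                if (instance.Property.PropertyType == typeof(string))
                    instance.CustomSqlType("nvarchar(max)");
            }
        }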

    Read the article

  • intelligent path truncation/ellipsis for display

    - by peterchen
    I am looking for an existing path truncation algorithm (similar to what the Win32 static control does with SS_PATHELLIPSIS) for a set of paths that should focus on the distinct elements. For example, if my paths are like this: Unit with X/Test 3V/ Unit with X/Test 4V/ Unit with X/Test 5V/ Unit without X/Test 3V/ Unit without X/Test 6V/ Unit without X/2nd Test 6V/ When not enough display space is available, they should be truncated to something like this: ...with X/...3V/ ...with X/...4V/ ...with X/...5V/ ...without X/...3V/ ...without X/...6V/ ...without X/2nd ...6V/ (Assuming that an ellipsis generally is shorter than three letters). This is just an example of a rather simple, ideal case (e.g. they'd all end up at different lengths now, and I wouldn't know how to create a good suggestion when a path "Thingie/Long Test/" is added to the pool). There is no given structure of the path elements, they are assigned by the user, but often items will have similar segments. It should work for proportional fonts, so the algorithm should take a measure function (and not call it too heavily) or generate a suggestion list. Data-wise, a typical use case would contain 2..4 path segments and 20 elements per segment. I am looking for previous attempts in that direction, and if that's solvable with a sensible amount of code or dependencies.
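
    For the prefix part of the problem, one simple heuristic, given here only as a rough sketch: per segment position, compute the longest common word prefix across all paths and collapse it to "...". It uses character counts instead of a real measure function, collapses unconditionally rather than only when space runs out, and does not handle mid-segment differences such as the "2nd Test 6V" case; System.Linq is assumed.

        static string[] Ellipsify(string[] paths)
        {
            string[][] rows = paths.Select(p => p.TrimEnd('/').Split('/')).ToArray();
            int columns = rows.Max(r => r.Length);

            for (int c = 0; c < columns; c++)
            {
                var cells = rows.Where(r => c < r.Length).Select(r => r[c]).Distinct().ToList();
                if (cells.Count < 2) continue;            // nothing to tell apart

                // Longest common prefix of this column, cut back to a word boundary.
                string prefix = cells.Aggregate((a, b) =>
                    new string(a.TakeWhile((ch, i) => i < b.Length && b[i] == ch).ToArray()));
                int cut = prefix.LastIndexOf(' ') + 1;
                if (cut < 3) continue;                    // "..." would not save anything

                foreach (var r in rows.Where(r => c < r.Length))
                    r[c] = "..." + r[c].Substring(cut);
            }
            return rows.Select(r => string.Join("/", r) + "/").ToArray();
        }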

    Read the article
