Search Results

Search found 7897 results on 316 pages for 'generate'.


  • PLKs and Web Service Software Factory

    - by Nix
    We found a bug in Web Service Software Factory; a description can be found here. There have been no updates on it, so we decided to download the code and fix it ourselves. It is a very simple bug and we patched it with maybe 3 lines of code. However, we have now tried to repackage it and use it, and are finding that this is seemingly an impossible process. Can someone please explain the process of PLKs to me? I have read all about them but still don't understand what is really required to distribute a VS package. I was able to get it to load and run using a PLK obtained from here, but I am assuming that you have to be a partner to get a functional PLK that will be recognized on other people's systems? Every time I try to install this on a different computer I get a "Package Load Failure". Is the reason I am getting errors that I am not using a partner key? Is there any other way around this? For instance, is there any way we can have an "internal" VS package that we can distribute? Edit: files I had to change to get it to work: first run devenv PostInstall.proj, then generate your PLKs and replace ##Package PLK## (.resx files) -- just note that the product name is not the class name but "Web Service Software Factory: Modeling Edition", and you need to remove the new lines from the key. In ProductDefinitionRegistryFragment.wxi, line 1252, update the version to whatever version you used in the PLK. Finally, uncomment all // [VSShell::ProvideLoadKey("Standard", ... constants in the .tt files.
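    For reference, the load key being generated here is normally declared on the package class via the ProvideLoadKey attribute from the VS SDK (Microsoft.VisualStudio.Shell). A minimal sketch of what that registration looks like -- the edition, version, product and company strings below are placeholders, not the factory's real values, and the PLK must be generated for exactly those strings and stored under the given resource ID:

        using Microsoft.VisualStudio.Shell;

        // Placeholder values: the PLK stored in the package's .resx (resource ID 1 here)
        // must have been generated for this exact edition/version/product/company combination.
        [ProvideLoadKey("Standard", "1.0", "Repackaged Service Factory", "My Company", 1)]
        public sealed class RepackagedFactoryPackage : Package
        {
        }

    This is why a repackaged build fails on other machines when any of those strings (or the key's line breaks) no longer match what the PLK was generated for.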

    Read the article

  • intelligent path truncation/ellipsis for display

    - by peterchen
    I am looking for an existing path truncation algorithm (similar to what the Win32 static control does with SS_PATHELLIPSIS) for a set of paths, one that focuses on the distinct elements. For example, if my paths are like this:
        Unit with X/Test 3V/
        Unit with X/Test 4V/
        Unit with X/Test 5V/
        Unit without X/Test 3V/
        Unit without X/Test 6V/
        Unit without X/2nd Test 6V/
    then, when not enough display space is available, they should be truncated to something like this:
        ...with X/...3V/
        ...with X/...4V/
        ...with X/...5V/
        ...without X/...3V/
        ...without X/...6V/
        ...without X/2nd ...6V/
    (assuming that an ellipsis is generally shorter than three letters). This is just an example of a rather simple, ideal case (e.g. they'd all end up at different lengths now, and I wouldn't know how to create a good suggestion when a path "Thingie/Long Test/" is added to the pool). There is no given structure to the path elements; they are assigned by the user, but often items will have similar segments. It should work for proportional fonts, so the algorithm should take a measure function (and not call it too heavily) or generate a suggestion list. Data-wise, a typical use case would contain 2..4 path segments and 20 elements per segment. I am looking for previous attempts in this direction, and whether it is solvable with a sensible amount of code or dependencies.
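    A naive starting point (my own sketch, not an existing library routine) is to work per segment column: find the longest whole-word prefix shared by every segment in that column and replace it with an ellipsis. This ignores font measurement and the mid-segment case ("2nd ...6V"), but it shows the general shape:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class PathEllipsis
        {
            // Longest common prefix of a list of strings, cut back to a word boundary.
            static string CommonWordPrefix(IList<string> items)
            {
                string prefix = items[0];
                foreach (string s in items.Skip(1))
                {
                    int i = 0;
                    while (i < prefix.Length && i < s.Length && prefix[i] == s[i]) i++;
                    prefix = prefix.Substring(0, i);
                }
                int lastSpace = prefix.LastIndexOf(' ');
                return lastSpace < 0 ? "" : prefix.Substring(0, lastSpace + 1);
            }

            // Replace the shared prefix of each column ("Unit " above) with "...".
            public static List<string[]> Truncate(List<string[]> paths)
            {
                var result = paths.Select(p => (string[])p.Clone()).ToList();
                int columns = result.Max(p => p.Length);
                for (int col = 0; col < columns; col++)
                {
                    var cells = result.Where(q => col < q.Length).Select(q => q[col]).ToList();
                    string prefix = CommonWordPrefix(cells);
                    if (prefix.Length <= 3) continue;        // not worth eliding
                    foreach (var path in result.Where(q => col < q.Length))
                        path[col] = "..." + path[col].Substring(prefix.Length);
                }
                return result;
            }
        }

    A real implementation would repeat this per group of similar segments rather than per whole column, and feed each candidate through the measure function until the longest line fits.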

    Read the article

  • change custom mapping - sharp architecture/ fluent nhibernate

    - by csetzkorn
    I am using the Sharp Architecture, which also deploys FNH. The DB schema SQL is generated during testing like this: [TestFixture] [Category("DB Tests")] public class MappingIntegrationTests { [SetUp] public virtual void SetUp() { string[] mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies(); configuration = NHibernateSession.Init( new SimpleSessionStorage(), mappingAssemblies, new AutoPersistenceModelGenerator().Generate(), "../../../../app/XXX.Web/NHibernate.config"); } [TearDown] public virtual void TearDown() { NHibernateSession.CloseAllSessions(); NHibernateSession.Reset(); } [Test] public void CanConfirmDatabaseMatchesMappings() { var allClassMetadata = NHibernateSession.GetDefaultSessionFactory().GetAllClassMetadata(); foreach (var entry in allClassMetadata) { NHibernateSession.Current.CreateCriteria(entry.Value.GetMappedClass(EntityMode.Poco)) .SetMaxResults(0).List(); } } /// <summary> /// Generates and outputs the database schema SQL to the console /// </summary> [Test] public void CanGenerateDatabaseSchema() { System.IO.TextWriter writeFile = new StreamWriter(@"d:/XXXSqlCreate.sql"); var session = NHibernateSession.GetDefaultSessionFactory().OpenSession(); new SchemaExport(configuration).Execute(true, false, false, session.Connection, writeFile); } private Configuration configuration; } I am trying to use: using FluentNHibernate.Automapping; using xxx.Core; using SharpArch.Data.NHibernate.FluentNHibernate; using FluentNHibernate.Automapping.Alterations; namespace xxx.Data.NHibernateMaps { public class x : IAutoMappingOverride<x> { public void Override(AutoMapping<x> mapping) { mapping.Map(x => x.text, "text").CustomSqlType("varchar(max)"); mapping.Map(x => x.url, "url").CustomSqlType("varchar(max)"); } } } to change the standard mapping of strings from NVARCHAR(255) to varchar(max). This is not picked up during the SQL schema generation. I also tried: mapping.Map(x => x.text, "text").Length(100000); Any ideas? Thanks. Christian
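    If the real goal is simply "every string column becomes varchar(max)", a Fluent NHibernate convention may be a better fit than per-property overrides. A sketch, under the assumption that the conventions API in the shipped FNH version exposes IPropertyConvention and CustomSqlType (worth verifying against your version):

        using FluentNHibernate.Conventions;
        using FluentNHibernate.Conventions.Instances;

        // Assumed sketch: push every string property to varchar(max) instead of nvarchar(255).
        public class LongStringConvention : IPropertyConvention
        {
            public void Apply(IPropertyInstance instance)
            {
                if (instance.Property.PropertyType == typeof(string))
                    instance.CustomSqlType("varchar(max)");
            }
        }

    Either way, the convention or the IAutoMappingOverride only takes effect if it is registered with the AutoPersistenceModel that AutoPersistenceModelGenerator builds, so it is worth checking that Generate() actually scans the xxx.Data.NHibernateMaps assembly; an override that is never registered silently does nothing, which would also explain the unchanged schema.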

    Read the article

  • writing a web service with dynamically determined web methods

    - by quillbreaker
    Let's say I have a text file of basic mathematical functions. I want to make a web service that answers these mathematical functions. Say the first one is y=x*x. If I wanted to turn this into a web service, I could simply do this: [WebMethod] public int A(int x) { return x*x; } However, that means I've extracted the function from the list by hand and coded it into a method by hand. That's not what I want to do. I want the WSDL for the service to be generated at call time directly from the text file, and I want the web method calls to the service to go to a specific method that also parses the text file at run time. How much heavy lifting is this? I've found a sample on how to generate WSDLs dynamically at this link, but there's a lot more to do beyond that, and I don't want to bark up this tree if there are parts of the project that aren't feasible. Does anyone have any links, guides, books, or positive experiences trying this kind of thing?
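    For what it's worth, the "parse the text file at run time" half is the easy part; the dynamic WSDL is the heavy lifting. A hedged sketch of the evaluation side, assuming the file holds lines like "A=x*x" and abusing DataTable.Compute as a throwaway arithmetic evaluator (swap in a real expression parser for anything beyond basic arithmetic):

        using System;
        using System.Data;
        using System.IO;
        using System.Linq;

        static class FormulaFile
        {
            // Looks up the named formula (e.g. "A") and evaluates it for the given x.
            public static double Evaluate(string path, string name, double x)
            {
                string[] parts = File.ReadAllLines(path)
                    .Select(l => l.Split('='))
                    .First(p => p[0].Trim() == name);

                // Naive substitution of the argument, then let DataTable.Compute do the math.
                string expression = parts[1].Replace("x", x.ToString());
                return Convert.ToDouble(new DataTable().Compute(expression, null));
            }
        }

    The generic endpoint would then be a single method that takes the function name and argument and calls something like this, with the per-function WSDL generated from the same file.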

    Read the article

  • Programming Technique: How to create a simple card game

    - by Shyam
    Hi, as I am learning the Ruby language, I am getting closer to actual programming, so I was thinking of creating a simple card game. My question isn't Ruby oriented, but I do want to learn how to solve this problem with a genuine OOP approach. In my card game I want to have four players, using a standard deck of 52 cards with no jokers/wildcards. In the game I won't use the Ace as a dual card; it is always the highest card. So, the programming problems I wonder about are the following: How can I sort/randomize the deck of cards? There are four suits, each having 13 values, and in the end every card must be unique, so picking random values could generate duplicates. How can I implement a simple AI? As there are tons of card games, someone will have figured this part out already, so references would be great. I am truly a Ruby newbie, and my goal here is to learn to solve problems, so pseudo code would be great, just to understand how to solve the problem programmatically. I apologize for my grammar and writing style if it's unclear, for it is not my native language. Also, pointers to sites where such challenges are explained would be a great resource! Thank you for your comments, answers and feedback!
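    On the shuffling question, the standard answer is a Fisher-Yates shuffle: build the 52 distinct cards first, then permute them in place, which cannot produce duplicates because you never draw random card values. A sketch in C# purely as illustration (the same loop is a few lines of Ruby, and recent Ruby versions already provide Array#shuffle):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Deck
        {
            static readonly string[] Suits = { "Clubs", "Diamonds", "Hearts", "Spades" };
            static readonly string[] Ranks =
                { "2", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King", "Ace" };

            // Build all 52 unique cards, then shuffle in place (Fisher-Yates).
            public static List<string> Shuffled(Random rng)
            {
                var cards = Suits.SelectMany(s => Ranks.Select(r => r + " of " + s)).ToList();
                for (int i = cards.Count - 1; i > 0; i--)
                {
                    int j = rng.Next(i + 1);          // 0 <= j <= i
                    string tmp = cards[i];
                    cards[i] = cards[j];
                    cards[j] = tmp;
                }
                return cards;
            }
        }

    Dealing is then just slicing the shuffled list into four hands of 13; the AI question is a separate (and much larger) topic.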

    Read the article

  • Optimized OCR black/white pixel algorithm

    - by eagle
    I am writing a simple OCR solution for a finite set of characters. That is, I know exactly what all 26 letters of the alphabet will look like. I am using C# and am able to easily determine whether a given pixel should be treated as black or white. I am generating a matrix of black/white pixels for every single character. So for example, the letter I (capital i) might look like the following:
        01110
        00100
        00100
        00100
        01110
    Note: all points, which I use later in this post, assume that the top left pixel is (0, 0) and the bottom right pixel is (4, 4). 1's represent black pixels, and 0's represent white pixels. I would create a corresponding matrix in C# like this: CreateLetter("I", new List<List<bool>>() { new List<bool>() { false, true, true, true, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, true, true, true, false } }); I know I could probably optimize this part by using a multi-dimensional array instead, but let's ignore that for now; this is for illustrative purposes. Every letter has exactly the same dimensions, 10px by 11px (10px by 11px are the actual dimensions of a character in my real program; I simplified this to 5px by 5px in this posting since it is much easier to "draw" the letters using 0's and 1's on a smaller image). Now when I give it a 10px by 11px part of an image to analyze with OCR, it would need to test every single letter (26) at every single pixel (10 * 11 = 110), which would mean 2,860 (26 * 110) iterations (in the worst case) for every single character. I was thinking this could be optimized by defining the unique characteristics of every character. So, for example, let's assume that the set of characters only consists of 5 distinct letters: I, A, O, B, and L. These might look like the following (one column per letter, in that order):
        01110   00100   00100   01100   01000
        00100   01010   01010   01010   01000
        00100   01110   01010   01100   01000
        00100   01010   01010   01010   01000
        01110   01010   00100   01100   01110
    After analyzing the unique characteristics of every character, I can significantly reduce the number of tests that need to be performed to test for a character. For example, for the "I" character, I could define its unique characteristic as having a black pixel at coordinate (3, 0), since no other character has that pixel as black. So instead of testing 110 pixels for a match on the "I" character, I reduced it to a 1-pixel test. This is what it might look like for all these characters: var LetterI = new OcrLetter() { Name = "I", BlackPixels = new List<Point>() { new Point (3, 0) } } var LetterA = new OcrLetter() { Name = "A", WhitePixels = new List<Point>() { new Point(2, 4) } } var LetterO = new OcrLetter() { Name = "O", BlackPixels = new List<Point>() { new Point(3, 2) }, WhitePixels = new List<Point>() { new Point(2, 2) } } var LetterB = new OcrLetter() { Name = "B", BlackPixels = new List<Point>() { new Point(3, 1) }, WhitePixels = new List<Point>() { new Point(3, 2) } } var LetterL = new OcrLetter() { Name = "L", BlackPixels = new List<Point>() { new Point(1, 1), new Point(3, 4) }, WhitePixels = new List<Point>() { new Point(2, 2) } } This is challenging to do manually for 5 characters and gets much harder the more letters are added. You also want to guarantee that you have the minimum set of unique characteristics for a letter, since you want it to be as optimized as possible.
    I want to create an algorithm that will identify the unique characteristics of all the letters and would generate code similar to that above. I would then use this optimized black/white matrix to identify characters. How do I take the 26 letters that have all their black/white pixels filled in (e.g. the CreateLetter code block) and convert them to an optimized set of unique characteristics that define a letter (e.g. the new OcrLetter() code block)? And how would I guarantee that it is the most efficient set of unique characteristics (e.g. instead of defining 6 points as the unique characteristics, there might be a way to do it with 1 or 2 points, as the letter "I" in my example was able to)? An alternative solution I've come up with is using a hash table, which will reduce it from 2,860 iterations to 110 iterations, a 26-fold reduction. This is how it might work: I would populate it with data similar to the following: Letters["01110 00100 00100 00100 01110"] = "I"; Letters["00100 01010 01110 01010 01010"] = "A"; Letters["00100 01010 01010 01010 00100"] = "O"; Letters["01100 01010 01100 01010 01100"] = "B"; Now when I reach a location in the image to process, I convert it to a string such as "01110 00100 00100 00100 01110" and simply find it in the hash table. This solution seems very simple; however, it still requires 110 iterations to generate this string for each letter. In big O notation the algorithm is the same, since O(110N) = O(2860N) = O(N) for N letters to process on the page. However, it is still improved by a constant factor of 26, a significant improvement (e.g. instead of it taking 26 minutes, it would take 1 minute). Update: most of the solutions provided so far have not addressed the issue of identifying the unique characteristics of a character, but rather provide alternative solutions. I am still looking for this solution, which, as far as I can tell, is the only way to achieve the fastest OCR processing. I just came up with a partial solution: for each pixel in the grid, store the letters that have it as a black pixel. Using these letters:
        I       A       O       B       L
        01110   00100   00100   01100   01000
        00100   01010   01010   01010   01000
        00100   01110   01010   01100   01000
        00100   01010   01010   01010   01000
        01110   01010   00100   01100   01110
    you would have something like this:
        CreatePixel(new Point(0, 0), new List<Char>() { });
        CreatePixel(new Point(1, 0), new List<Char>() { 'I', 'B', 'L' });
        CreatePixel(new Point(2, 0), new List<Char>() { 'I', 'A', 'O', 'B' });
        CreatePixel(new Point(3, 0), new List<Char>() { 'I' });
        CreatePixel(new Point(4, 0), new List<Char>() { });
        CreatePixel(new Point(0, 1), new List<Char>() { });
        CreatePixel(new Point(1, 1), new List<Char>() { 'A', 'B', 'L' });
        CreatePixel(new Point(2, 1), new List<Char>() { 'I' });
        CreatePixel(new Point(3, 1), new List<Char>() { 'A', 'O', 'B' });
        // ...
        CreatePixel(new Point(2, 2), new List<Char>() { 'I', 'A', 'B' });
        CreatePixel(new Point(3, 2), new List<Char>() { 'A', 'O' });
        // ...
        CreatePixel(new Point(2, 4), new List<Char>() { 'I', 'O', 'B', 'L' });
        CreatePixel(new Point(3, 4), new List<Char>() { 'I', 'A', 'L' });
        CreatePixel(new Point(4, 4), new List<Char>() { });
    Now, for every letter, in order to find the unique characteristics, you need to look at which buckets it belongs to, as well as the number of other characters in each bucket. So let's take the example of "I". We go to all the buckets it belongs to (1,0; 2,0; 3,0; ...; 3,4) and see that the one with the fewest other characters is (3,0). In fact, it only has 1 character, meaning it must be an "I" in this case, and we have found our unique characteristic. You can also do the same for pixels that would be white. Notice that bucket (2,0) contains all the letters except for "L", which means it could be used as a white pixel test. Similarly, (2,4) doesn't contain an 'A'. Buckets that either contain all the letters or none of the letters can be discarded immediately, since these pixels can't help define a unique characteristic (e.g. 1,1; 4,0; 0,1; 4,4). It gets trickier when you don't have a 1-pixel test for a letter, for example in the case of 'O' and 'B'. Let's walk through the test for 'O'... It's contained in the following buckets:
        // Bucket   Count   Letters
        // 2,0      4       I, A, O, B
        // 3,1      3       A, O, B
        // 3,2      2       A, O
        // 2,4      4       I, O, B, L
    Additionally, we also have a few white pixel tests that can help (I only listed those that are missing at most 2; the Missing Count was calculated as (5 - Bucket.Count)):
        // Bucket   Missing Count   Missing Letters
        // 1,0      2               A, O
        // 1,1      2               I, O
        // 2,2      2               O, L
        // 3,4      2               O, B
    So now we can take the shortest black pixel bucket (3,2) and see that when we test for (3,2) we know it is either an 'A' or an 'O'. So we need an easy way to tell the difference between an 'A' and an 'O'. We could either look for a black pixel bucket that contains 'O' but not 'A' (e.g. 2,4) or a white pixel bucket that contains an 'O' but not an 'A' (e.g. 1,1). Either of these could be used in combination with the (3,2) pixel to uniquely identify the letter 'O' with only 2 tests. This seems like a simple algorithm when there are 5 characters, but how would I do this when there are 26 letters and a lot more overlapping pixels? For example, let's say that after the (3,2) pixel test, it found 10 different characters that contain the pixel (and this was the fewest of all the buckets). Now I need to find differences from 9 other characters instead of only 1 other character. How would I achieve my goal of getting the fewest checks possible, and ensure that I am not running extraneous tests?
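    One way to automate exactly this bucket reasoning is a greedy elimination loop per letter: keep a candidate set of the other letters, and at each step pick the pixel test (black or white) that rules out the most remaining candidates, until only the target is left. Greedy selection is not guaranteed to give the minimum number of tests (that is essentially a set-cover problem), but it is simple and usually close. A sketch, assuming each letter is stored as a bool[,] grid:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class PixelTest
        {
            public int X, Y;
            public bool Black;   // expected color at (X, Y)
        }

        static class OcrSignatures
        {
            // Greedily pick pixel tests for one target letter until no other letter
            // matches all of them. Not guaranteed minimal, but cheap to compute for
            // 26 letters of 10x11 pixels.
            public static List<PixelTest> TestsFor(string target, Dictionary<string, bool[,]> letters)
            {
                bool[,] grid = letters[target];
                int w = grid.GetLength(0), h = grid.GetLength(1);
                var candidates = new HashSet<string>(letters.Keys.Where(k => k != target));
                var tests = new List<PixelTest>();

                while (candidates.Count > 0)
                {
                    PixelTest best = null;
                    int bestEliminated = -1;
                    for (int x = 0; x < w; x++)
                        for (int y = 0; y < h; y++)
                        {
                            bool expected = grid[x, y];
                            // Candidates that disagree with the target at this pixel get ruled out.
                            int eliminated = candidates.Count(c => letters[c][x, y] != expected);
                            if (eliminated > bestEliminated)
                            {
                                bestEliminated = eliminated;
                                best = new PixelTest { X = x, Y = y, Black = expected };
                            }
                        }
                    if (bestEliminated == 0) break;   // remaining letters are pixel-identical to the target
                    tests.Add(best);
                    candidates.RemoveWhere(c => letters[c][best.X, best.Y] != best.Black);
                }
                return tests;
            }
        }

    Running this over the 5-letter example should reproduce the hand-derived results above (a single (3,0) black test for "I", two tests for "O"); for 26 letters it simply takes more elimination rounds per letter.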

    Read the article

  • How to refresh jasper image without flickering?

    - by Aru
    Hi, I am using JasperReports to generate a graph, and I am refreshing the graph. The problem is that the graph flickers while it refreshes. The code is: PrintWriter out = response.getWriter(); response.setContentType("text/html"); JRDataSource dataSource = createReportDataSource(perfArrayListSample.toArray()); ServletContext context = this.getServletConfig().getServletContext(); File reportFile = new File(context.getRealPath("/cpuUsageGraph.jasper")); if (!reportFile.exists()) throw new JRRuntimeException("File cpuUsageGraph.jasper not found. The report design must be compiled first."); JasperReport jasperReport = (JasperReport)JRLoader.loadObject(reportFile.getPath()); JasperPrint print = JasperFillManager.fillReport(jasperReport, new HashMap(), dataSource); JRHtmlExporter exporter = new JRHtmlExporter(); request.getSession().setAttribute(ImageServlet.DEFAULT_JASPER_PRINT_SESSION_ATTRIBUTE, print); exporter.setParameter(JRExporterParameter.JASPER_PRINT, print); exporter.setParameter(JRExporterParameter.OUTPUT_WRITER, out); exporter.setParameter(JRHtmlExporterParameter.IMAGES_URI, "image?ver="+new Date().getTime() +"&image="); exporter.exportReport(); perfArrayListSample.clear(); So how do I refresh the graph without flickering?

    Read the article

  • Aliasing Resources (WPF)

    - by Noldorin
    I am trying to alias a resource in XAML, as follows: <UserControl.Resources> <StaticResourceExtension x:Key="newName" ResourceKey="oldName"/> </UserControl.Resources> oldName simply refers to a resource of type Image, defined in App.xaml. As far as I understand, this is the correct way to do this, and it should work fine. However, the XAML code gives me the superbly unhelpful error: "The application XAML file failed to load. Fix errors in the application XAML before opening other XAML files." This appears when I hover over the StaticResourceExtension line in the code (which has a squiggly underline). Several other errors are generated in the actual Error List, but they seem to be fairly irrelevant and nonsensical (such messages as "The name 'InitializeComponent' does not exist in the current context"), as they all disappear when the line is removed. I'm completely stumped here. Why is WPF complaining about this code? Any ideas as to a resolution, please? Note: I'm using WPF in .NET 3.5 SP1. Update 1: I should clarify that I do receive compiler errors (the aforementioned messages in the Error List), so it's not just a designer problem. Update 2: Here's the relevant code in full... In App.xaml (under Application.Resources): <Image x:Key="bulletArrowUp" Source="Images/Icons/bullet_arrow_up.png" Stretch="None"/> <Image x:Key="bulletArrowDown" Source="Images/Icons/bullet_arrow_down.png" Stretch="None"/> And in MyUserControl.xaml (under UserControl.Resources): <StaticResourceExtension x:Key="columnHeaderSortUpImage" ResourceKey="bulletArrowUp"/> <StaticResourceExtension x:Key="columnHeaderSortDownImage" ResourceKey="bulletArrowDown"/> These are the lines that generate the errors, of course.

    Read the article

  • OpenGL - drawing 2D polygons shapes with texture

    - by plonkplonk
    I am trying to make a few effects in a C + OpenGL game. So far I draw all my sprites as quads, and it works. However, I am trying to make a large ring appear at times, with a texture following that ring, as it takes less memory than a quad with the ring texture inside. The type of ring I want to make is not a round-shaped GL mesh ring (the "tube" type) but a flat "paper" 2D ring. That way I can modify the "width" of the ring, getting more of the effect than a simple quad + ring texture. So far all my attempts have been... kind of ridiculous, as I don't understand GL's coordinates too well (and I can't really understand the available documentation... I am just a designer with no coder help or background. A n00b, basically). glBegin(GL_POLYGON); for(i = 0; i < 360; i += 10){ glTexCoord2f(0, 0); glVertex2f(Cos(i)*(H-10), Sin(i)*H); glTexCoord2f(0, HP); glVertex2f(Sin(i)*(H-10), Cos(i)*(H-10)); glTexCoord2f(WP, HP); glVertex2f(Cos(i)*H, Sin(i)*(H-10)); glTexCoord2f(WP, 0); glVertex2f(Sin(i)*H, Cos(i)*H); } glEnd(); This is my last attempt, and it seems to generate a "sunburst" from the right edge of the circle instead of a ring. It's an amusing effect but definitely not what I want. Other results included the circle looking exactly the same as the textured quad (aka drawing a sprite literally) or something that looked like a pop-art filter, by working on this train of thought. It seems my logic here is entirely flawed, so what would be the easiest way to obtain such a ring? No need to reply in code, just some guidance for a non-math-skilled user...
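    For what it's worth, the usual way to build a flat ring is a strip of quads between an inner and an outer radius: each angle step contributes one outer vertex and one inner vertex, and the texture's V coordinate runs across the ring's width. A sketch of just the geometry, written in C# only for readability (angles in radians; the actual GL draw calls are left out on purpose):

        using System;
        using System.Collections.Generic;

        static class RingGeometry
        {
            // One vertex of the ring strip: position plus texture coordinates.
            public struct Vertex
            {
                public double X, Y, U, V;
            }

            // For each angle step emit the outer vertex (V = 0) and the inner vertex (V = 1).
            // Drawing the result as a quad/triangle strip and closing the loop at 360 degrees
            // gives a solid 2D ring whose width is outerRadius - innerRadius.
            public static List<Vertex> Build(double outerRadius, double width, int steps)
            {
                var strip = new List<Vertex>();
                for (int i = 0; i <= steps; i++)
                {
                    double angle = 2.0 * Math.PI * i / steps;     // radians, not degrees
                    double c = Math.Cos(angle), s = Math.Sin(angle);
                    double u = (double)i / steps;                  // texture runs around the ring
                    strip.Add(new Vertex { X = c * outerRadius, Y = s * outerRadius, U = u, V = 0 });
                    strip.Add(new Vertex { X = c * (outerRadius - width), Y = s * (outerRadius - width), U = u, V = 1 });
                }
                return strip;
            }
        }

    The key differences from the snippet above are that both coordinates of a vertex use the same angle (Cos for X, Sin for Y, never swapped) and that consecutive angle steps share vertices via the strip, rather than each iteration emitting an independent polygon.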

    Read the article

  • Cucumber Error: Socket Error for Test Environment Host in REST API

    - by tmo256
    I posted this to the Cucumber group with no replies, which makes me wonder if this is actually a Cucumber issue or not. I'm pretty new to Cucumber, and there are a number of things I really don't quite understand about how the Cucumber environment is set up and executed within the test environment. I have a REST API Rails app I'm testing with Cucumber, using the RestClient gem to generate a post to a controller's create action. When I run the feature with a hard-coded URL pointing to a running localhost server (my local dev server environment; replacing tickets_url with "http://localhost/tickets" in the snippet below), my Cucumber steps execute as expected. However, when the resource URL resolves to the Cucumber host I'm declaring, I get a socket error exception: getaddrinfo: nodename nor servname provided, or not known (SocketError) From the steps file: When /^POS Adapter sends JSON data to the Tickets resource$/ do ticket = { :ticket => { ... } } host! "test.host" puts tickets_url RestClient.post tickets_url, ticket.to_json, :content_type => :json, :accepts => :json end (the "puts" statement prints "http://test.host/tickets") Using the following gems: cucumber-0.6.1 webrat-0.6.0 rest-client-1.2.0 I should also say I have a similar setup in another Rails app, using test.host as my host, and it seems to work fine. I'd appreciate any insight on what I might be missing in my configuration or what this could be related to.

    Read the article

  • Map large integer to a phrase

    - by Alexander Gladysh
    I have a large and "unique" integer (actually a SHA1 hash). I want (for no other reason than to have fun) to find an algorithm to convert that SHA1 hash to a (pseudo-)English phrase. The conversion should be reversible (i.e., knowing the algorithm, one must be able to convert the phrase back to the SHA1 hash). The possible usage of the generated phrase: a human-readable version of a Git commit ID, like a motto for a given program version (which is built from that commit). (As I said, this is "for fun". I don't claim that this is very practical — or much more readable than the SHA1 itself.) A better algorithm would produce shorter, more natural-looking, more unique phrases. The phrase need not make sense. I would even settle for a whole paragraph of nonsense. (Though the quality — the "englishness" — of a paragraph should probably be better than for a mere phrase.) A variation: it is OK if I am only able to work with part of the hash. Say, the first six digits are OK. Possible approach: in the past I've attempted to build a probability table (of words) and generate phrases as Markov chains, seeding the generator (picking branches from the probability tree) according to the bits I read from the SHA. This was not very successful; the resulting phrases were too long and ugly. I'm not sure if this was a bug or a general flaw in the algorithm, since I had to abandon it early enough. Now I'm thinking about attempting to solve the problem once again. Any advice on how to approach this? Do you think the Markov chain approach can work here? Something else?
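    For the reversibility requirement, the simplest scheme is not a Markov chain but a fixed dictionary used as a numeral system: take the hash bits in fixed-size chunks and index a word list with each chunk (the same idea as the PGP word list and other mnemonic encodings). A sketch, assuming 11-bit chunks and a 2048-entry word list; the phrase won't be grammatical, but it maps back to the encoded bits exactly:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class HashPhrase
        {
            // Encode: take the first `chunks` 11-bit groups of the hash and use each one
            // as an index into a 2048-word dictionary (chunks * 11 must fit in the hash).
            // Decoding is the reverse lookup, so the mapping is exactly reversible.
            public static string Encode(byte[] hash, IList<string> words2048, int chunks)
            {
                var bits = new List<bool>();
                foreach (byte b in hash)
                    for (int i = 7; i >= 0; i--)
                        bits.Add(((b >> i) & 1) == 1);

                var phrase = new List<string>();
                for (int c = 0; c < chunks; c++)
                {
                    int index = 0;
                    for (int i = 0; i < 11; i++)
                        index = (index << 1) | (bits[c * 11 + i] ? 1 : 0);
                    phrase.Add(words2048[index]);
                }
                return string.Join(" ", phrase.ToArray());
            }

            // Each word maps back to its 11-bit index.
            public static IEnumerable<int> Decode(string phrase, IList<string> words2048)
            {
                return phrase.Split(' ').Select(w => words2048.IndexOf(w));
            }
        }

    Making the output read more like English is then a matter of using a different word list per position (adjective, noun, verb, ...) instead of one flat list, which keeps the scheme reversible while improving the "englishness".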

    Read the article

  • Just introducing myself to TMPing, and came across a quirk

    - by Justen
    I was just trying to learn the syntax of the beginner things and how it all works when I wrote this short bit of code. The code below works for adding the numbers 1 to 499, but if I change it to 1 to 500, the compiler bugs out, giving me: fatal error C1001: An internal error has occurred in the compiler. I was just wondering why that is. Is there some limit to how much code the compiler can generate or something, and it just happened to be a nice round number of 500 for me?

        #include <iostream>
        using namespace std;

        template < int b >
        struct loop
        {
            enum { sum = loop< b - 1 >::sum + b };
        };

        template <>
        struct loop< 0 >
        {
            enum { sum = 0 };
        };

        int main()
        {
            cout << "Adding the numbers from 1 to 499 = " << loop< 499 >::sum << endl;
            return 0;
        }

    Read the article

  • How can I create my own form designer?

    - by Carson Myers
    I'm starting my first C# project, and I want to make a "form designer" (like the one in VS). The idea is that there will be a visual form designer with a limited toolbox, which will generate Python code (more languages later) to create the same form. Problem is, I have no idea how to even get started. First of all, there is the form designer in VS: how do I make a "form within a form"? Next... I have no idea how complicated this is going to be. I suppose I could just make little boxes appear beside each control created on the form when it is clicked, for resizing, and make a textbox appear on it when double-clicked or something, to change its text... things like this. So another thing I would like to know is this: I do have programming experience in C and C++, I've done PHP for a number of years, and I started with Python recently. I've generated forms dynamically in VB6. Given this experience, am I in way over my head with this project?
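    On the "form within a form" question, the framework ships a building block for this: System.ComponentModel.Design.DesignSurface (in System.Design.dll) hosts a root component and hands back a designer view control you can dock inside your own window. A minimal sketch just to show the shape of it (toolbox, property grid, selection handles and error handling are all omitted):

        using System.ComponentModel.Design;
        using System.Windows.Forms;

        public class DesignerHostForm : Form
        {
            public DesignerHostForm()
            {
                // Create a design surface whose root component is an empty Form,
                // then embed the designer's view control inside this window.
                var surface = new DesignSurface(typeof(Form));
                var designerView = (Control)surface.View;
                designerView.Dock = DockStyle.Fill;
                Controls.Add(designerView);
            }
        }

    Generating Python code is then a matter of walking the components the surface hosts and emitting equivalent statements, which is the part you would have to write yourself.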

    Read the article

  • What is the suggested approach to Syncing/Backing up/Restoring from SQL Server 2008 to SQL Server 20

    - by Eoin Campbell
    I only have SQL Server 2008 (Dev Edition) on my development machine, and only SQL Server 2005 is available with my hosting company (and I don't have direct connection access to that database). I'm just wondering what the best approach is for: getting the initial DB structure and data into production, and keeping any structural changes/data changes in sync in the future. As far as I can see:
    - Replication - not an option because I can't connect to the production DB.
    - Restoring a backup - not an option because, as far as I can see, you cannot export a DB from 2008 that is restorable in 2005 (even with the 2008 DB set to 2005 compatibility mode), and it wouldn't make sense to restore production over the top of my dev version anyway.
    - Dump all the scripts from my 2008 database, revert my dev machine from 2008 to 2005, and recreate the database from the scripts; then just use backup & restore to get the initial DB into production, and run scripts through the web panel from that point onwards.
    - Dump all the scripts from my 2008 database and generate the entire 2005 DB from scripts in production; then run scripts through the web panel from that point onwards.
    With the last 2 options, I'd probably need to script all the data inserts as well, using some tool (which I presume exists on the web). Are there any other possible solutions that I'm not considering?

    Read the article

  • parameterized doctype in xsl:stylesheet

    - by Flavius
    How can I inject a --stringparam (xsltproc) into the DOCTYPE of an XSL stylesheet? The --stringparam is specified from the command line. I have several books in DocBook 5 format that I want to process with the same customization layer, each book having a unique identifier, here "demo", so I'm running something like xsltproc --stringparam course.name demo ... for each book. Obviously the parameter is not recognized as such, but treated as verbatim text, giving the error: warning: failed to load external entity "http://edu.yet-another-project.com/course/$(course.name)/entities.ent" Here is how I've tried it, which doesn't work: <?xml version='1.0'?> <!DOCTYPE stylesheet [ <!ENTITY % myent SYSTEM "http://edu.yet-another-project.com/course/$(course.name)/entities.ent"> %myent; ]> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <!-- the docbook template used --> <xsl:import href="http://docbook.org/ns/docbook/xhtml/chunk.xsl"/> <!-- processor parameters --> <xsl:param name="html.stylesheet">default.css</xsl:param> <xsl:param name="use.id.as.filename">1</xsl:param> <xsl:param name="chunker.output.encoding">UTF-8</xsl:param> <xsl:param name="chunker.output.indent">yes</xsl:param> <xsl:param name="navig.graphics">1</xsl:param> <xsl:param name="generate.revhistory.link">1</xsl:param> <xsl:param name="admon.graphics">1</xsl:param> <!-- here more stuff --> </xsl:stylesheet> Ideas?

    Read the article

  • Transaction Isolation Level of Serializable not working for me

    - by Shahriar
    I have a website which is used by all branches of a store, and what it does is record customer purchases in a table called myTransactions. The myTransactions table has a column named SerialNumber. For each purchase I create a record in the transactions table and assign a serial to it. The stored procedure that does this calls a UDF to get a new serial number before inserting the record, like below: Create Procedure mytransaction_Insert as begin insert into myTransactions(column1,column2,column3,...SerialNumber) values( Value1, Value2, Value3, ...., getTransactionNSerialNumber()) end Create function getTransactionNSerialNumber() returns int as begin RETURN isnull((SELECT TOP (1) SerialNumber FROM myTransactions READUNCOMMITTED ORDER BY SerialNumber DESC), 0) + 1 end The website is being used by so many users in different stores at the same time, and it is creating many duplicate SerialNumbers (the same SerialNumbers). So I added a SQL transaction with the READ COMMITTED level to the procedure, and I still got duplicate transaction numbers. I changed it to SERIALIZABLE in order to lock the resources, and I not only got duplicate transaction numbers (!!HOW!!) but I also got sporadic deadlocks between the same stored procedure calls. This is what I tried (with the try/catch blocks and rollbacks omitted): Create Procedure mytransaction_Insert as begin SET TRANSACTION ISOLATION LEVEL SERIALIZABLE BEGIN TRANSACTION ins insert into myTransactions(column1,column2,column3,...SerialNumber) values( Value1, Value2, Value3, ...., getTransactionNSerialNumber()) COMMIT TRANSACTION ins SET TRANSACTION ISOLATION LEVEL READ COMMITTED end I even copied the function that gets the serial number directly into the stored procedure instead of calling the UDF, and still got duplicate serial numbers. So, how can a stored procedure achieve something like the C# lock() {} block? By the way, I have to implement the transaction serial number using the same pattern, and I can't change the SerialNumber to an identity field or anything else. And for some reasons I need to generate the SerialNumber inside the database and can't move serial number generation to the application level. Thank you.

    Read the article

  • Android Home Screen Widget Failing with EditText

    - by Chris
    Whenever I add an EditText widget to the layout of my home screen widget (confusing how the term "widget" is being used twice in the Android lexicon :-/ ), I receive the "Problem Loading Widget" error box. Here is the layout I'm attempting; if you remove the EditText, it works... <RelativeLayout android:layout_width="fill_parent" android:layout_height="fill_parent" xmlns:android="http://schemas.android.com/apk/res/android"> <Button android:id="@+id/button_generate" android:layout_width="54px" android:layout_height="54px" android:text="Generate" android:textSize="10sp" android:gravity="center" android:layout_alignParentTop="true" android:layout_toRightOf="@+id/edittext_key"> </Button> <TextView android:id="@+id/textview_hash" android:layout_width="75px" android:layout_height="45px" android:text="Password" android:textSize="11sp" android:gravity="left" android:layout_alignParentTop="true" android:layout_toLeftOf="@+id/edittext_key"> </TextView> <EditText android:id="@+id/edittext_data2" android:layout_width="200px" android:layout_height="50px" android:textSize="12sp" android:layout_marginTop="20px" android:layout_alignParentTop="true" android:layout_centerHorizontal="true"> </EditText> </RelativeLayout> Now, the Google Search home screen widget has an EditText, so it's obviously legal to implement. Any thoughts on why this is not working?

    Read the article

  • Converting python collaborative filtering code to use Map Reduce

    - by Neil Kodner
    Using Python, I'm computing cosine similarity across items. Given event data that represents a purchase (user, item), I have a list of all items 'bought' by my users. Given this input data (user, item): X,1 X,2 Y,1 Y,2 Z,2 Z,3 I build a Python dictionary {1: ['X','Y'], 2: ['X','Y','Z'], 3: ['Z']}. From that dictionary, I generate a bought/not-bought matrix, also a dictionary (bnb): {1: [1,1,0], 2: [1,1,1], 3: [0,0,1]}. From there, I compute the similarity between (1,2) by calculating the cosine between (1,1,0) and (1,1,1), yielding 0.816496. I'm doing this by:

        items = [1, 2, 3]
        for item in items:
            for sub in items:
                if sub >= item:  # so as not to calculate similarity on the inverse
                    sim = coSim(bnb[item], bnb[sub])

    I think the brute force approach is killing me, and it only runs slower as the data gets larger. Using my trusty laptop, this calculation runs for hours when dealing with 8500 users and 3500 items. I'm trying to compute similarity for all items in my dict, and it's taking longer than I'd like it to. I think this is a good candidate for MapReduce, but I'm having trouble 'thinking' in terms of key/value pairs. Alternatively, is the issue with my approach, and is it not necessarily a candidate for MapReduce?
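    One observation that helps with the key/value framing: for binary bought/not-bought vectors, the cosine reduces to the number of users who bought both items divided by sqrt(buyers of A * buyers of B) -- 2 / sqrt(2 * 3) ≈ 0.8165 for items 1 and 2 above. A quick sketch of that reduced computation (shown in C# purely as illustration):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ItemSimilarity
        {
            // For binary vectors, cosine similarity collapses to
            // (users who bought both) / sqrt(buyers of A * buyers of B).
            public static double Cosine(ISet<string> buyersOfA, ISet<string> buyersOfB)
            {
                int both = buyersOfA.Count(u => buyersOfB.Contains(u));
                return both / Math.Sqrt((double)buyersOfA.Count * buyersOfB.Count);
            }
        }

    In MapReduce terms, this suggests mapping each user's basket to (itemA, itemB) pairs with a count of 1, summing the pair counts in the reducer, and normalizing by the per-item totals at the end -- only co-purchased pairs are ever emitted, instead of all item pairs.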

    Read the article

  • Design to distribute work when generating task oriented input for legacy dos application?

    - by TheDeeno
    I'm attempting to automate a really old DOS application. I've decided the best way to do this is via input redirection. The legacy app (menu driven) has many tasks within tasks with branching logic. In order to easily understand and reuse the input for these tasks, I'd like to break them into bite-size pieces. Since I'll need to start a fresh app on each run, repeating a context to consume a piece might be messy. I'd like to create an object model that: allows me to concentrate on the task at hand, allows me to reuse common tasks from different start points, and prevents me from calling a task from the wrong start point. To be more explicit, given I have the following task hierarchy:
        START
          A
            A1
              A1a
              A1b
            A2
              A2a
          B
            B1
              B1a
    I'd like an object model that lets me generate an input file for task "A1b" by using building blocks like: START -> do_A, do_A1, do_A1b but prevents me from: START -> do_A1 // because I'm assuming a different call chain from above This will help me write "do_A1b" because I can always assume the same starting context, and will simplify writing "do_A1a" because it has THE SAME starting context. What patterns will help me out here? I'm using Ruby at the moment, so if dynamic language features can help, I'm game.
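    One way to get the "prevents me from calling a task from the wrong start point" property is to encode the start point in the types: each building block accepts only the context produced by its legal parent and returns a new context, so an illegal chain cannot even be expressed. A sketch of the idea in C# (in Ruby the equivalent would raise on a wrong context object at run time; all names here are made up):

        using System.Collections.Generic;

        // Each context type represents "where the legacy app's menu currently is".
        class StartContext { public List<string> Input = new List<string>(); }
        class AContext     { public List<string> Input; }
        class A1Context    { public List<string> Input; }

        static class LegacyTasks
        {
            public static StartContext Start() { return new StartContext(); }

            // do_A is only legal from the start menu, because it demands a StartContext...
            public static AContext Do_A(StartContext ctx)
            {
                ctx.Input.Add("A");
                return new AContext { Input = ctx.Input };
            }

            // ...and do_A1 is only legal after do_A, because it demands an AContext.
            public static A1Context Do_A1(AContext ctx)
            {
                ctx.Input.Add("1");
                return new A1Context { Input = ctx.Input };
            }

            public static A1Context Do_A1b(A1Context ctx)
            {
                ctx.Input.Add("b");
                return ctx;
            }
        }

    Usage reads like the desired chain: LegacyTasks.Do_A1b(LegacyTasks.Do_A1(LegacyTasks.Do_A(LegacyTasks.Start()))).Input yields the accumulated input, while Do_A1 straight from Start() simply doesn't type-check.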

    Read the article

  • centos: linking lib that doesn't have a .pc file for pkg-config

    - by Paulie
    I'm trying to compile svg2pdf on CentOS. I think I've managed to get the required dependencies installed using yum: sudo yum install librsvg2 sudo yum install cairo The Makefile contains: MYCFLAGS=`pkg-config --cflags librsvg-2.0 cairo-pdf` -Wall -Wpointer-arith -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-strict-aliasing MYLDFLAGS=`pkg-config --libs librsvg-2.0 cairo-pdf` After typing 'make', the first couple of lines of output are: cc `pkg-config --cflags librsvg-2.0 cairo-pdf` -Wall -Wpointer-arith -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-strict-aliasing `pkg-config --libs librsvg-2.0 cairo-pdf` svg2pdf.c -o svg2pdf Package librsvg-2.0 was not found in the pkg-config search path. Perhaps you should add the directory containing `librsvg-2.0.pc' to the PKG_CONFIG_PATH environment variable No package 'librsvg-2.0' found There is no librsvg-2.0.pc on this system (but there is one on my MacBook Pro, where I installed this using MacPorts). How should I get this package linked? Should I change the Makefile, and if so, to what? Should I generate the .pc files, and if so, how? Or is there another way to resolve this? Or, has anyone been successful at installing svg2pdf on CentOS, and, if so, how did you manage it? I don't have much experience with Linux, so I might be missing something obvious.

    Read the article

  • why assign null value or another default value firstly?

    - by Phsika
    I am trying to generate some code and have come face to face with delegates. Everything is OK (look below), but a warning appears telling me I should assign a value. Why? The second piece of code below is OK. namespace Delegates { class Program { static void Main(string[] args) { HesapMak hesapla = new HesapMak(); hesapla.Calculator = new HesapMak.Hesap(hesapla.Sum); double sonuc = hesapla.Calculator(34, 2); Console.WriteLine("Toplama Sonucu:{0}",sonuc.ToString()); Console.ReadKey(); } } class HesapMak { public double Sum(double s1, double s2) { return s1 + s2; } public double Cikarma(double s1, double s2) { return s1 - s2; } public double Multiply(double s1, double s2) { return s1 * s2; } public double Divide(double s1, double s2) { return s1 / s2; } public delegate double Hesap(double s1, double s2); public Hesap Calculator; // <---- the compiler wants me to assign a value here } } namespace Delegates { class Program { static void Main(string[] args) { HesapMak hesapla = new HesapMak(); hesapla.Calculator = new HesapMak.Hesap(hesapla.Sum); double sonuc = hesapla.Calculator(34, 2); Console.WriteLine("Toplama Sonucu:{0}",sonuc.ToString()); Console.ReadKey(); } } class HesapMak { public double Sum(double s1, double s2) { return s1 + s2; } public double Cikarma(double s1, double s2) { return s1 - s2; } public double Multiply(double s1, double s2) { return s1 * s2; } public double Divide(double s1, double s2) { return s1 / s2; } public delegate double Hesap(double s1, double s2); public Hesap Calculator=null; } }

    Read the article

  • Covariance and Contravariance type inference in C# 4.0

    - by devoured elysium
    When we define our interfaces in C# 4.0, we are allowed to mark each of the generic parameters as in or out. If we try to set a generic parameter as out and that'd lead to a problem, the compiler raises an error, not allowing us to do that. Question: If the compiler has ways of inferring what are valid uses for both covariance (out) and contravariance(in), why do we have to mark interfaces as such? Wouldn't it be enough to just let us define the interfaces as we always did, and when we tried to use them in our client code, raise an error if we tried to use them in an un-safe way? Example: interface MyInterface<out T> { T abracadabra(); } //works OK interface MyInterface2<in T> { T abracadabra(); } //compiler raises an error. //This makes me think that the compiler is cappable //of understanding what situations might generate //run-time problems and then prohibits them. Also, isn't it what Java does in the same situation? From what I recall, you just do something like IMyInterface<? extends whatever> myInterface; //covariance IMyInterface<? super whatever> myInterface2; //contravariance Or am I mixing things? Thanks

    Read the article

  • axis2 maven example

    - by larrycai
    I am trying to use Axis2 (version 1.5.1) to generate Java code from WSDL files, but I can't figure out the correct pom.xml: <build> <plugins> <plugin> <groupId>org.apache.axis2</groupId> <artifactId>axis2-wsdl2code-maven-plugin</artifactId> <version>1.5.1</version> <executions> <execution> <goals> <goal>wsdl2code</goal> </goals> <configuration> <wsdlFile>src/main/resources/wsdl/stockquote.wsdl</wsdlFile> <databindingName>xmlbeans</databindingName> <packageName>a.bc</packageName> </configuration> </execution> </executions> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>org.apache.axis2</groupId> <artifactId>axis2</artifactId> <version>1.5.1</version> </dependency> </dependencies> When I type mvn compile, it complains: Retrieving document at 'src/main/resources/wsdl/stockquote.wsdl'. java.lang.ClassNotFoundException: org.apache.xml.serializer.TreeWalker And if I try to find the TreeWalker, it is a mess to find a suitable jar file. Can someone give me a hint, or show me a correct pom.xml?

    Read the article

  • Any good tools or tips for fuzz testing Windows forms applications?

    - by Ogre Psalm33
    I'm maintaining a ~300K LOC C# legacy thick-client application with a Windows.Forms interface. The app is full of little bugs and quirks. For example, I recently discovered a bug where, if a user edits and tabs (not clicks) through cells on a DataGridView and leaves a certain cell selected, the app gets an "Object reference not set to an instance of an object" exception. I discover (or get a bug report of) something new like this about every week or two. I've had enough, and was thinking of trying some sort of fuzz testing on the application to try to ferret out undiscovered issues. If I roll my own fuzz testing, I assume I'd at least need to be able to generate test harnesses that run pieces of my app (main window, FormX, FormY, FormZ, ...) independently and try to inject events into them. I was trying to look for tools suited to this, but so far have come up with nothing for Win Forms. (There seems to be no shortage of fuzz testing tools for web apps, however.) Any helpful ideas?
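    In the absence of a dedicated tool, a crude roll-your-own harness is not much code: show one form at a time, repeatedly focus a random control, and pump random keystrokes at it while logging anything that blows up. A hedged sketch of that idea (it only walks top-level controls, assumes the form under test keeps foreground focus for SendKeys, and is meant for a dedicated test session, not a definitive framework):

        using System;
        using System.Linq;
        using System.Windows.Forms;

        static class FormFuzzer
        {
            static readonly Random Rng = new Random();
            // SendKeys tokens to throw at the form: plain text plus a few special keys.
            static readonly string[] Keys = { "a", "0", " ", "{TAB}", "{ENTER}", "{DEL}", "{ESC}", "+a" };

            // Crude fuzz loop: focus a random control, send a short random key burst,
            // and log anything that throws while the resulting events are processed.
            public static void Fuzz(Form form, int iterations)
            {
                form.Show();
                form.Activate();                    // SendKeys targets the active window
                for (int i = 0; i < iterations; i++)
                {
                    try
                    {
                        var controls = form.Controls.Cast<Control>().Where(c => c.CanFocus).ToList();
                        if (controls.Count == 0) break;
                        controls[Rng.Next(controls.Count)].Focus();

                        for (int k = 0; k < 5; k++)
                            SendKeys.SendWait(Keys[Rng.Next(Keys.Length)]);

                        Application.DoEvents();     // let the form react to the input
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Iteration {0} crashed: {1}", i, ex);
                    }
                }
            }
        }

    Recording the random seed per run makes the crashes reproducible, which is the part that turns this from noise into usable bug reports.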

    Read the article

  • How to invalidate the OutputCache in a webfarm?

    - by Pure.Krome
    Hi folks, I've got a website that uses the OutputCache attribute to cache pages. Works great. Now I'm in the middle of R&D for scaling this site up to a web farm. Along with the usual suspects for web farm pain... I've noticed (pretty quickly/obviously) that the OutputCache on Server_A doesn't invalidate the OutputCache on Server_B if I try to invalidate a single server's OutputCache. This makes total sense - how can S_A 'tell' S_B to invalidate when they are physically 2 separate machines, etc.? So, what are our options? Velocity? I understand this will move the caching to a different layer... which means that the final result (output) will still always have to be rendered, as opposed to the OutputCache, which remembers the final output content (yes, VaryBy gives different versions, etc., which is totally fine). So even though the POCOs or business objects are all synced, there's still that last rendering effort required (even if it's tiny compared to the effort to generate/sync the business objects). So yeah... not sure of the options here - what do other people do?
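    One common pattern (a sketch, not a drop-in answer) is to keep OutputCache on each server but broadcast invalidations: every server exposes a small endpoint that evicts its own cached entry via HttpResponse.RemoveOutputCacheItem, and whichever server performs the data change pings all of its peers. The peer host list and the /cache/invalidate route below are assumptions you would wire up in config and routing:

        using System;
        using System.Net;
        using System.Web;
        using System.Web.Mvc;

        public class CacheController : Controller
        {
            // Hosted on every server in the farm; evicts that server's OutputCache entry.
            public ActionResult Invalidate(string path)
            {
                HttpResponse.RemoveOutputCacheItem(path);   // e.g. "/products/list"
                return Content("ok");
            }

            // Called by whichever server changed the data: tell every peer to evict.
            public static void InvalidateFarm(string[] peerHosts, string path)
            {
                foreach (var host in peerHosts)             // assumed to come from config
                {
                    using (var client = new WebClient())
                        client.DownloadString("http://" + host + "/cache/invalidate?path=" +
                                              Uri.EscapeDataString(path));
                }
            }
        }

    The endpoint obviously needs to be locked down to the farm's internal network; the alternative is exactly what you describe -- move to a shared cache layer like Velocity and accept that the final rendering still happens per request.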

    Read the article
