Search Results

Search found 26297 results on 1052 pages for 'unit test'.

  • Documentation and Build system for Mono/C#

    - by dcolish
    I'm starting out on a new project, and a team member has decided to use C# as the implementation language. I don't have a lot of experience in C#, but a brief reading shows that it is a very capable, cross-platform language and runtime. Beyond the language itself, I've been having trouble selecting tools and workflows for managing the code as the project grows. It should be fairly small (<10K lines), but I would like the ability to generate documentation as the project grows, manage any external dependencies we decide to use, and automate builds and testing. I am wondering what tools are commonly used or considered best practices for this language.

    I am mainly concerned with how a build system would work on *nix as well as Windows. Are there C#-specific tools, or is Make more common? In addition, I'd like to use a DVCS, but it doesn't look like Visual Studio and MonoDevelop support the same ones. What's the common VCS of choice for C#? For testing, what sort of unit testing is available for C#/Mono? Finally, I know that there are good doc generators, but given the question about the build system, I would really like documentation generation to be a single step in the build, just as testing is a step. Normally I'd automate with Hudson, but I am wondering if there is something more specific to the platform. Overall, I'd love to see a solution that provides a decent workflow on both Windows and *nix without a heavy admin burden. I am pretty sure this is the holy grail of project management, so anything that puts me on that path is awesome.
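
    On the unit-testing point, NUnit runs on Mono as well as .NET and is a common choice; a minimal fixture looks roughly like this (a sketch; the Calculator class is a made-up example, not from the question):

        using NUnit.Framework;

        public class Calculator
        {
            public int Add(int a, int b) { return a + b; }
        }

        [TestFixture]
        public class CalculatorTests
        {
            [Test]
            public void Add_ReturnsSumOfOperands()
            {
                // Assert.AreEqual(expected, actual)
                Assert.AreEqual(4, new Calculator().Add(2, 2));
            }
        }

    Tests like these can then be driven as one step of an automated build (e.g. from a CI server such as Hudson) via the nunit-console runner.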

  • Cannot Display Chinese Characters in my PHP code

    - by Jun1st
    I want to display my Twitter info on my blog, so I wrote some code to fetch it. The issue is that Chinese characters are displayed as garbled codes. Here is the test code; could anyone take a look and help? Thanks

        <html>
        <title>Twitter Test</title>
        <body>
        <?php
        function mystique_objectToArray($object){
            if(!is_object($object) && !is_array($object)) return $object;
            if(is_object($object)) $object = get_object_vars($object);
            return array_map('mystique_objectToArray', $object);
        }

        define( 'ABSPATH', dirname(dirname(__FILE__)) . '/' );
        require_once('/home/jun1st/jun1stfeng.com/wp-includes/class-snoopy.php');

        $snoopy = new Snoopy;
        $response = @$snoopy->fetch("http://twitter.com/users/show/jun1st.json");
        if ($response) $userdata = json_decode($snoopy->results, true);
        else $error = true;

        $response = @$snoopy->fetch("http://twitter.com/statuses/user_timeline/jun1st.json");
        if ($response) $tweets = json_decode($snoopy->results, true);
        else $error = true;

        if(!$error): // for php < 5 (included JSON returns object)
            $userdata = mystique_objectToArray($userdata);
            $tweets = mystique_objectToArray($tweets);
            $twitdata = array();
            $twitdata['user']['profile_image_url'] = $userdata['profile_image_url'];
            $twitdata['user']['name'] = $userdata['name'];
            $twitdata['user']['screen_name'] = $userdata['screen_name'];
            $twitdata['user']['followers_count'] = $userdata['followers_count'];
            $i = 0;
            foreach($tweets as $tweet):
                $twitdata['tweets'][$i]['text'] = $tweet['text'];
                $twitdata['tweets'][$i]['created_at'] = $tweet['created_at'];
                $twitdata['tweets'][$i]['id'] = $tweet['id'];
                $i++;
            endforeach;
        endif;

        // only show if the twitter data from the database is newer than 6 hours
        if(is_array($twitdata['tweets'])):
        ?>
        <div class="clear-block">
            <div class="avatar"><img src="<?php echo $twitdata['user']['profile_image_url']; ?>" alt="<?php echo $twitdata['user']['name']; ?>" /></div>
            <div class="info"><a href="http://www.twitter.com/jun1st"><?php echo $twitdata['user']['name']; ?> </a><br /><span class="followers"> <?php printf(__("%s followers","mystique"),$twitdata['user']['followers_count']); ?></span></div>
        </div>
        <ul>
        <?php
        $i = 0;
        foreach($twitdata['tweets'] as $tweet):
            $pattern = '/\@(\w+)/';
            $replace = '<a rel="nofollow" href="http://twitter.com/$1">@$1</a>';
            $tweet['text'] = preg_replace($pattern, $replace , $tweet['text']);
            $tweet['text'] = make_clickable($tweet['text']);
            // remove +XXXX
            $tweettime = substr_replace($tweet['created_at'],'',strpos($tweet['created_at'],"+"),5);
            $link = "http://twitter.com/".$twitdata['user']['screen_name']."/statuses/".$tweet['id'];
            echo '<li><span class="entry">' . $tweet['text'] .'<a class="date" href="'.$link.'" rel="nofollow">'.$tweettime.'</a></span></li>';
            $i++;
            if ($i == $twitcount) break;
        endforeach;
        ?>
        </ul>
        <? endif?>
        ?>
        </body>
        </html>

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it, but I would like your opinion. The first approach I considered was:

        class Customer : EntityBase<Customer>
        {
            public void ChangeEmail(string email)
            {
                if(string.IsNullOrWhiteSpace(email)) throw new DomainException("...");
                if(!email.IsEmail()) throw new DomainException();
                if(email.Contains("@mailinator.com")) throw new DomainException();
            }
        }

    I actually do not like this validation because, even though I am encapsulating the validation logic in the correct entity, it violates the Open/Closed principle (open for extension, closed for modification), and I have found that once this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read and hard to maintain. But the real reason I do not like this approach is: if the validation rules change, I have to come back and edit my domain entity. This has been a really simple example, but in real life the validation can be more complex.

    So, following Udi Dahan's philosophy of making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this:

        class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer>
        {
            private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver;

            public bool IsSatisfiedBy(Customer customer)
            {
                return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email);
            }
        }

    But then I realized that in order to follow this approach I had to mutate my entities first in order to pass in the value being validated, in this case the email; and mutating them would cause my domain events to fire, which I wouldn't want to happen until the new email is known to be valid.

    So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture:

        class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand>
        {
            public void IsValid(Customer entity, ChangeEmailCommand command)
            {
                if(!command.Email.HasValidDomain()) throw new DomainException("...");
            }
        }

    Well, that's the main idea. The entity is passed to the validator in case we need some value from the entity to perform the validation; the command contains the data coming from the user; and since the validators are considered injectable objects, they can have external dependencies injected if the validation requires it.

    Now the dilemma. I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy unit testing, easy maintenance, domain invariants explicitly expressed in the Ubiquitous Language, easy extension, centralized validation logic, and validators that can be combined to enforce complex domain rules. And even though I know I am placing the validation of my entities outside of them (you could argue a code smell, an anemic domain), I think the trade-off is acceptable.

    But there is one thing I have not figured out how to implement in a clean way: how should I use these components? Since they will be injected, they won't fit naturally inside my domain entities, so basically I see two options: (1) pass the validators to each method of my entity, or (2) validate my objects externally, from the command handler. I am not happy with option 1, so I will explain how I would do it with option 2:

        class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand>
        {
            // here I would get the validators required for this command injected
            private IEnumerable<IDomainInvariantValidator> validators;

            public void Execute(ChangeEmailCommand command)
            {
                // and in here I would validate, something like this
                using (var t = this.unitOfWork.BeginTransaction())
                {
                    var customer = this.unitOfWork.Get<Customer>(command.CustomerId);
                    this.validators.ForEach(x => x.IsValid(customer, command));
                    // here I know the command is valid
                    // the call to ChangeEmail will fire domain events as needed
                    customer.ChangeEmail(command.Email);
                    t.Commit();
                }
            }
        }

    Well, this is it. Can you give me your thoughts about this, or share your experiences with domain entity validation?

    EDIT: I think it is not clear from my question, but the real problem is this: hiding the domain rules has serious implications for the future maintainability of the application, and domain rules change often during the life-cycle of the app, so implementing them with this in mind lets us extend them easily. Now imagine that in the future a rules engine is introduced: if the rules are encapsulated outside of the domain entities, that change would be easier to implement.

  • Temporary "Backup" of SharePoint Content During Feature and Solution Deployment

    - by ccomet
    I need to decide on a method for storing a subset of the content in a SharePoint site, so that when I delete and recreate certain lists as part of a feature activation, I can re-insert all of this content back where it belongs. I have an idea myself, but I don't know if it's the only method or, more importantly, the right method.

    My client has me creating a SharePoint system for them to communicate with their clients. The business process has maybe 5 stages in it (maybe more, I don't even know because they don't tell me everything), and the current system I've written over the past months is maybe 2 stages through. This meets our deadline of completing those systems by Monday next week... but at that point my client is planning on making the site live. In effect, their work with their clients will run in parallel with my work for them. As I complete each following stage of the process on a separate test server, I'll push it onto the live server. Scheduled downtimes during non-business hours (like a weekend) will be available for me to perform these pushes. Keeping pace so that my development is faster than the actual business process is my own problem and off-topic... so let's get back to the problem I stated at the start of this post.

    In this system, we have sets of features which create lists for their associated content types and field types when activated, and delete these lists when the feature is deactivated. Most updates don't need to deactivate and reactivate these features: workflow changes, custom actions, custom forms, and similar ilk. But some parts do require it. On my test server, it's okay for me to obliterate lists, but once the site is live and there's real correspondence data, that is absolutely unacceptable. So when I need to implement a new piece of functionality, I need to be able to store the data currently present in several lists, deactivate the feature, reactivate the feature, and restore all of this data. Perhaps I have hoisted myself by my own petard with the feature system I implemented. Unfortunately, the need to later create several of these "project sites" meant I had to write a lot of my code with the concept of "can be deployed repeatedly" in mind.

    My current plan is to run through the lists and libraries which will be affected by the particular feature that is to be reset. Files and all of their versions will be saved in a directory on the server. Then, a set of text files will be used to store all of the important field values for the items. This includes a lot of cross-list reference lookups that will need to be maintained, but that's simple enough. Then, I deactivate the feature, deploy the new solution, and reactivate the feature. We upload all of the files in the order specified by their versions and update them with the stored fields for those versions, so that we retain the version structure. As each one is first uploaded, the new ID is picked out, and all relevant lookups in the rest of the files are updated (in some manner that ensures I don't re-update it later with an incorrect value, of course). After that, we run through all the rest of the items in the order most conducive to keeping the relational data correct. That roughly summarizes my current plan.

    To my advantage, there are no long-running workflows in the system that will be affected by this, so there's nothing I have to worry about "still running" when I do this. I don't really know all the cons of this approach... I can imagine they're quite hefty. But I'm unsure what other choices I even have, and my searches haven't turned up anything. Can anyone think of a better idea? Or will everyone just tell me that I really have no other choice? Thanks in advance!

  • Database Design Question regarding duplicate information

    - by galford13x
    I have a database that contains a history of product sales, for example the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,   -- Order number, unique across all orders
            ProductID, -- Product ID; can be used as a key to look up product info in another table
            Price,     -- Price of the product per unit at the time of the order
            Quantity,  -- Quantity of the product for the order
            Total,     -- Total cost of the order for the product (Price * Quantity)
            Date,      -- Date of the order
            StoreID,   -- The store that created the order
            PRIMARY KEY(OrderID));

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get the information based on products for any particular store. You could then determine which store has sold the most and made the most money, and which on average sells for the most/least. This would be too costly to run as a normal query at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of that update is almost nothing. What should be considered in the above scenario?

  • DDD: Persisting aggregates

    - by Mosh
    Hello. Let's consider the typical Order and OrderItem example. Assuming that OrderItem is part of the Order aggregate, it can only be added via Order. So, to add a new OrderItem to an Order, we have to load the entire aggregate via the repository, add the new item to the Order object, and persist the entire aggregate again.

    This seems to have a lot of overhead. What if our Order has 10 OrderItems? Then, just to add a new OrderItem, not only do we have to read 10 OrderItems, we also have to re-insert all 10 of them again. (This is the approach Jimmy Nilsson takes in his DDD book: every time he wants to persist an aggregate, he clears all the child objects and then re-inserts them. This can cause other issues, as the IDs of the children change every time because of the IDENTITY column in the database.)

    I know some people may suggest applying the Unit of Work pattern at the aggregate root so that it keeps track of what has changed and only commits those changes. But this violates the Persistence Ignorance (PI) principle, because persistence logic leaks into the domain model. Has anyone thought about this before?

    Mosh
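
    For concreteness, the aggregate shape under discussion looks something like the following (a sketch with assumed member names, just to anchor the question; the Order root owns its items, so the repository deals only in whole Orders):

        using System.Collections.Generic;

        public class OrderItem
        {
            public int Id { get; set; }          // IDENTITY-assigned on insert
            public string Product { get; set; }
            public int Quantity { get; set; }
        }

        public class Order
        {
            private readonly List<OrderItem> items = new List<OrderItem>();
            public IEnumerable<OrderItem> Items { get { return items; } }

            // Items can only enter the aggregate through the root
            public void AddItem(string product, int quantity)
            {
                items.Add(new OrderItem { Product = product, Quantity = quantity });
            }
        }

        public interface IOrderRepository
        {
            Order GetById(int orderId);   // loads the root plus all its items
            void Save(Order order);       // persists the whole aggregate again
        }

    The overhead described above sits in the Save call: with a clear-and-reinsert strategy it rewrites every existing item just to add one.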

  • JPA - Performance when using multiple entity managers

    - by Nguyen Tuan Linh
    My situation is: the code is not mine, and I have two kinds of database: one is Dad, one is Son. In Dad, I have a table that stores JNDI names. I look up Dad using JNDI, create an entity manager, and retrieve this table. From the retrieved JNDI names, I then create multiple entity managers against multiple Son databases.

    The problem is: Son has thousands of entities. It takes each Son database around 10 minutes to load all entities; if there are 4 Son databases, that is 40 minutes.

    My question: is there any way to load all entities once and use them for all entity managers? Please look at the code below. For each Son JNDI:

        Map<String, String> puSonProperties = new HashMap<String, String>();
        puSonProperties.put("javax.persistence.jtaDataSource", sonJndi);
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("PUSon", puSonProperties);

    PUSon - all of them use the same persistence unit.

        log.info("Verify entity manager for son: {0} - {1}", sonCode,
            emSon.find(Son_configuration.class, 0) != null ? "ok" : "failed!");

    This is the actual call where the loading of all entities begins - the 10 minutes.

  • HttpParsing for hypertext

    - by Nani
    I am in the process of getting all hierarchical links from a given link and validating them. This is the code I wrote, but I don't feel it is efficient. The reasons are:

    1. For non-unique links which open the same page, the code fetches the sub-links again and again.
    2. Is the code getting all links?
    3. Is it making valid URLs from the sub-links it derives?
    4. Maybe some other reasons about which I have no idea.

    Please suggest how to make this piece of code efficient. Thank you.

        class Program
        {
            public static ArrayList sublink = new ArrayList();
            public static ArrayList subtitle = new ArrayList();
            public static int ini = 0, len_o, len_n, counter = 0;

            static void Main(string[] args)
            {
                // Address of URL
                string URL = "http://www.techonthenet.com/";
                sublink.Add(URL);
            l:
                len_o = sublink.Count;
                Console.WriteLine("-------------Level:" + counter++);
                for (int i = ini; i < len_o; i++)
                    test(sublink[i].ToString());
                len_n = sublink.Count;
                if (len_o < len_n)
                {
                    ini = len_o;
                    goto l;
                }
                Console.ReadKey();
            }

            // method to get the sub-links
            public static void test(string URL)
            {
                try
                {
                    // Get HTML data
                    WebClient client = new WebClient();
                    Stream data = client.OpenRead(URL);
                    StreamReader reader = new StreamReader(data);
                    string str = "", htmldata = "", temp;
                    int n1, n2;
                    str = reader.ReadLine();
                    while (str != null)
                    {
                        htmldata += str;
                        str = reader.ReadLine();
                    }
                    data.Close();
                    for (int i = 0; i < htmldata.Length - 5; i++)
                    {
                        if (htmldata.Substring(i, 5) == "href=")
                        {
                            n1 = htmldata.Substring(i + 6, htmldata.Length - (i + 6)).IndexOf("\"");
                            temp = htmldata.Substring(i + 6, n1);
                            if (temp.Length > 4 && temp.Substring(0, 4) != "http")
                            {
                                if (temp.Substring(0, 1) != "/")
                                    temp = URL.Substring(0, URL.IndexOf(".com/") + 5) + temp;
                                else
                                    temp = URL.Substring(0, URL.IndexOf(".com/") + 5) + temp.Remove(0, 1);
                            }
                            if (temp.Length < 4)
                                temp = URL.Substring(0, URL.IndexOf(".com/") + 5) + temp;
                            sublink.Add(temp);
                            n2 = htmldata.Substring(i + n1 + 1, htmldata.Length - (i + n1 + 1)).IndexOf("<");
                            subtitle.Add(htmldata.Substring(i + 6 + n1 + 2, n2 - 7));
                            i += temp.Length + htmldata.Substring(i + 6 + n1 + 2, n2 - 7).Length;
                        }
                    }
                    for (int i = len_n; i < sublink.Count; i++)
                        Console.WriteLine(i + "--> " + sublink[i]);
                }
                catch (WebException exp)
                {
                    Console.WriteLine("URL Could not be Resolved" + URL);
                    Console.WriteLine(exp.Message, "Exception");
                }
            }
        }

  • Parse XML function names and call within whole assembly

    - by Matt Clarkson
    Hello all. I have written an application that unit tests our hardware via an internet browser. I have command classes in the assembly that wrap individual web browser actions, such as ticking a checkbox or selecting from a dropdown box:

        BasicConfigurationCommands
        EventConfigurationCommands
        StabilizationCommands

    and a set of test classes that use the command classes to perform scripted tests:

        ConfigurationTests
        StabilizationTests

    These are then invoked via the GUI to run prescripted tests by our QA team. However, as the firmware changes quite quickly between releases, it would be great if a developer could write an XML file that could invoke either the tests or the commands:

        <?xml version="1.0" encoding="UTF-8" ?>
        <testsuite>
          <StabilizationTests>
            <StressTest repetition="10" />
          </StabilizationTests>
          <BasicConfigurationCommands>
            <SelectConfig number="2" />
            <ChangeConfigProperties name="Weeeeee" timeOut="15000" delay="1000"/>
            <ApplyConfig />
          </BasicConfigurationCommands>
        </testsuite>

    I have been looking at the System.Reflection class and have seen examples using GetMethod and then Invoke. This requires me to create the class object at compile time, and I would like to do all of this at runtime: I would need to scan the whole assembly for the class name and then scan for the method within the class. This seems a large solution, so any information pointing me (and future readers of this post) towards an answer would be great! Thanks for reading, Matt
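
    As a rough illustration of the runtime half (a sketch, with error handling and the XML-walking omitted): the type can be located by name across the whole assembly, instantiated, and the element/attribute names mapped onto a method call via reflection:

        using System;
        using System.Linq;
        using System.Reflection;

        static class ScriptRunner
        {
            // e.g. InvokeByName("BasicConfigurationCommands", "SelectConfig", new object[] { 2 })
            public static object InvokeByName(string className, string methodName, object[] args)
            {
                Assembly assembly = Assembly.GetExecutingAssembly();

                // scan every type in the assembly for a matching (namespace-free) name
                Type type = assembly.GetTypes().First(t => t.Name == className);

                // create the instance at runtime - no compile-time reference needed
                object instance = Activator.CreateInstance(type);

                MethodInfo method = type.GetMethod(methodName);
                return method.Invoke(instance, args);
            }
        }

    Attribute values arrive as strings, so in practice each argument would need converting to the parameter types reported by method.GetParameters() (e.g. with Convert.ChangeType) before the Invoke call.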

  • Testing a wide variety of computers with a small company

    - by Tom the Junglist
    Hello everyone. I work for a small dotcom which will soon be launching a reasonably complicated Windows program. As the program has been passed around to the various not-technical-types, we have uncovered a number of "WTF?"-type scenarios that we've been unable to replicate.

    One of the biggest problems we're facing is testing: there are a total of three programmers (only one working on this particular project, me), no testers, and a handful of assorted other staff (sales, etc.). We are also geographically isolated. The "testing lab" consists of a handful of VMware and VPC images running sort-of fresh installs of Windows XP and Vista, which run on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to report problems most effectively, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us, their reporting is only so useful, and arranging remote-control sessions to dig into the guts of their computers is time-consuming.

    I am looking for resources that allow us to amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look into them, however they are still largely self-service, and I was wondering if there were any better ways. How have you, or other companies in a similar situation, handled this sort of thing?

    EDIT: In the lingo, our goal here is to expand our systems-testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions for beefing up our unit/integration testing will help very much, as we are consistently hitting walls at the systems-testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2? Tom

  • Problem with memset after an instance of a user defined class is created and a file is opened

    - by Liberalkid
    I'm having a weird problem with memset that has something to do with a class I create, and a file I open in its constructor, before the memset. The class I'm working with normally reads in an array and transforms it into another array, but that's not important. The class is:

        #include <vector>
        #include <algorithm>
        using namespace std;

        class PreProcess
        {
        public:
            PreProcess(char* fileName, char* outFileName);
            void SortedOrder();
        private:
            vector< vector<double> > matrix;
            void SortRow(vector<double> &row);
            char* newFileName;
            vector< pair<double,int> > rowSorted;
        };

    The other functions aren't important, because I've stopped calling them and the problem persists. Essentially I've narrowed it down to my constructor:

        PreProcess::PreProcess(char* fileName, char* outFileName) : newFileName(outFileName)
        {
            ifstream input(fileName);
            input.close(); // this statement is inconsequential
        }

    I also read in the file in my constructor, but I've found that the problem persists if I don't read in the matrix and just open the file. Essentially, if I comment out those two lines, the memset works properly; otherwise it doesn't.

    Now to the context of the problem. I wrote my own simple wrapper class for matrices. It doesn't have much functionality; I just need 2D arrays in the next part of my project, and having a class handle everything makes more sense to me. The header file:

        #include <iostream>
        using namespace std;

        class Matrix
        {
        public:
            Matrix(int r, int c);
            int &operator()(int i, int j)
            {   // I know I should check my bounds here
                return matrix[i*columns + j];
            }
            ~Matrix();
            const void Display();
        private:
            int *matrix;
            const int rows;
            const int columns;
        };

    Implementation:

        #include "Matrix.h"
        #include <string>
        using namespace std;

        Matrix::Matrix(int r, int c) : rows(r), columns(c)
        {
            matrix = new int[rows*columns];
            memset(matrix, 0, sizeof(matrix));
        }

        const void Matrix::Display()
        {
            for(int i = 0; i < rows; i++)
            {
                for(int j = 0; j < columns; j++)
                    cout << (*this)(i,j) << " ";
                cout << endl;
            }
        }

        Matrix::~Matrix()
        {
            delete matrix;
        }

    My main program runs:

        PreProcess test1(argv[1], argv[2]);
        //test1.SortedOrder();
        Matrix test(10,10);
        test.Display();

    and when I run this with the input line uncommented I get:

        0 0 0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0 0 0
        0 0 0 0 0 0 -1371727776 32698 -1 0
        0 0 0 0 6332656 0 -1 -1 0 0
        6332672 0 0 0 0 0 0 0 0 0
        0 0 0 0 -1371732704 32698 0 0 0 0
        0 0 0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0 0 0

    I really don't have a clue what's going on in memory to cause this. On a side note, if I replace the memset with:

        for(int i = 0; i < rows*columns; i++)
            *(matrix + i) &= 0x0;

    then it works perfectly. It also works if I don't open the file. If it helps, I'm running GCC 64-bit version 4.2.4 on Ubuntu. I assume there's some functionality of memset that I'm not properly understanding.

  • if (i == 2) or if (2 == i) ?

    - by Maroloccio
    I usually use

        if (i == 2)

    in preference to

        if (2 == i)

    On occasion, I switch things around when writing xUnit-type tests from scratch, so as to follow the assertion convention of "expected" preceding "actual". When adding to existing unit tests, I always follow the style I find. No matter what, I always try to keep things consistent.

    Today I checked out some code with a lot of "if (2 == i)" and started wondering: which style is more "popular" nowadays, and is popularity language-dependent? The latter question probably because I am aware of why "if (2 == i)" became common in the first place (C heritage), and because I see some languages go as far as disallowing assignments within conditions (e.g. Python).

    I thought about downloading some sources:

        apt-get source linux-source eclipse openoffice.org

    expanding them, and performing a quick grep:

        grep --color --include=*.java --include=*.c -ERI \
            "if[[:space:]]*\([[:space:]]*[[:digit:]]+[[:space:]]==" .

    or creating a quick "poll": http://goo.gl/mod/ciMF

    After a bit of searching and asking around, I am still not sure. So I am asking you: which way to go?
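
    For context on the language-dependence point, the hazard the "Yoda" style guards against barely exists in C#: an int assignment in a condition is a compile error, and only bool conditions can still be assigned by accident. A small illustration (my own example, not from the post):

        using System;

        class YodaConditions
        {
            static void Main()
            {
                int i = 2;
                // if (i = 2) ...   // the classic C bug; in C# this is error CS0029
                // if (2 = i) ...   // the Yoda form makes it an error in C as well

                bool ready = false;
                if (ready = true)   // still compiles in C# (warning CS0665): assigns, then tests true
                    Console.WriteLine("accidental assignment slipped through");

                if (2 == i)
                    Console.WriteLine("comparison works the same either way round");
            }
        }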

  • Searching a document for multiple terms in VBA?

    - by Tony
    I'm trying to create a macro to be used in Microsoft Word 2007 that will search a document for multiple keywords (string variables) located in an external Excel file (the reason for having it in an external file is that the terms will often be changed and updated). I've figured out how to search a document paragraph by paragraph for a single term and color every instance of that term, and I assumed that the proper method would be to use a dynamic array as the search term variable. The question is: how do I get the macro to create an array containing all the terms from an external file and search each paragraph for each and every term? This is what I have so far:

        Sub SearchForMultipleTerms()
        '
        Dim SearchTerm As String 'declare search term
        'prompt for term. this should be removed, as the terms should come
        'from an external XLS file rather than user input.
        SearchTerm = InputBox("What are you looking for?")
        Selection.Find.ClearFormatting
        Selection.Find.Replacement.ClearFormatting
        With Selection.Find
            .Text = SearchTerm 'find the term!
            .Forward = True
            .Wrap = wdFindStop
            .Format = False
            .MatchCase = False
            .MatchWholeWord = False
            .MatchWildcards = False
            .MatchSoundsLike = False
            .MatchAllWordForms = False
        End With
        While Selection.Find.Execute
            Selection.GoTo What:=wdGoToBookmark, Name:="\Para" 'select paragraph
            Selection.Font.Color = wdColorGray40 'color paragraph
            Selection.MoveDown Unit:=wdParagraph, Count:=1 'move to next paragraph
        Wend
        End Sub

    Thanks for looking!

  • Open XML SDK 2.0 - Split table to a new PowerPoint slide when content flows off the current slide

    - by amurra
    I have a bunch of data that I need to export from a website to a PowerPoint presentation and have been using the Open XML SDK 2.0 to perform this task. I have a presentation that I am putting through the Open XML SDK 2.0 Productivity Tool to generate template code that I can use to recreate the export. On one of those slides I have a table, and the requirement is to add data to that table and break the table across multiple slides if it exceeds the bottom of the slide. The approach I have taken is to determine the height of the table and, if it exceeds the height of the slide, move the new content onto the next slide.

    I have read Bryan and Jones' blog on adding repeating data to a PowerPoint slide, but my scenario is a little different. They use the following code:

        A.Table tbl = current.Slide.Descendants<A.Table>().First();
        A.TableRow tr = new A.TableRow();
        tr.Height = heightInEmu;
        tr.Append(CreateDrawingCell(imageRel + imageRelId));
        tr.Append(CreateTextCell(category));
        tr.Append(CreateTextCell(subcategory));
        tr.Append(CreateTextCell(model));
        tr.Append(CreateTextCell(price.ToString()));
        tbl.Append(tr);
        imageRelId++;

    This won't work for me, since they know what height to set the table row to (it will be the height of the image), but when adding varying amounts of text I do not know the height ahead of time, so I just set tr.Height to a default value. Here is my attempt at figuring out the table height:

        A.Table tbl = tableSlide.Slide.Descendants<A.Table>().First();
        A.TableRow tr = new A.TableRow();
        tr.Height = 370840L;
        tr.Append(PowerPointUtilities.CreateTextCell("This"));
        tr.Append(PowerPointUtilities.CreateTextCell("is"));
        tr.Append(PowerPointUtilities.CreateTextCell("a"));
        tr.Append(PowerPointUtilities.CreateTextCell("test"));
        tr.Append(PowerPointUtilities.CreateTextCell("Test"));
        tbl.Append(tr);
        tableSlide.Slide.Save();

        long tableHeight = PowerPointUtilities.TableHeight(tbl);

    Here are the helper methods:

        public static A.TableCell CreateTextCell(string text)
        {
            A.TableCell tableCell = new A.TableCell(
                new A.TextBody(new A.BodyProperties(),
                    new A.Paragraph(new A.Run(new A.Text(text)))),
                new A.TableCellProperties());
            return tableCell;
        }

        public static Int64Value TableHeight(A.Table table)
        {
            long height = 0;
            foreach (var row in table.Descendants<A.TableRow>()
                .Where(h => h.Height.HasValue))
            {
                height += row.Height.Value;
            }
            return height;
        }

    This correctly adds the new table row to the existing table, but when I try to get the height of the table, it returns the original height and not the new one - that is, the default height I initially set, not the height after a large amount of text has been inserted. It seems the height only gets readjusted when the file is opened in PowerPoint. I have also tried accessing the height of the largest table cell in the row, but can't seem to find the right property for that.

    My question is: how do you determine the height of a dynamically added table row, given that it doesn't seem to update until the file is opened in PowerPoint? Are there any other ways to determine when to split content to another slide while using the Open XML SDK 2.0? I'm open to any suggestions on a better approach someone might have taken, since there isn't much documentation on this subject.
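
    Since the stored Height value is only a hint that PowerPoint recomputes at layout time, one workaround is to estimate each row's height yourself before appending it. This is a sketch under assumptions I'm making (fixed font size and average characters per line; the only facts used are the EMU conversions, 914400 EMU per inch and 12700 EMU per point - none of this is an Open XML API):

        using System;

        static class TableSplitter
        {
            // Estimate a row's rendered height in EMUs from its longest cell text.
            static long EstimateRowHeightEmu(string text, int charsPerLine = 40, double fontPoints = 18)
            {
                int lines = Math.Max(1, (text.Length + charsPerLine - 1) / charsPerLine);
                double lineHeightPoints = fontPoints * 1.2;      // assumed line spacing
                return (long)(lines * lineHeightPoints * 12700); // points -> EMU
            }

            static void Main()
            {
                long usableHeightEmu = 5 * 914400;  // assume ~5 inches of table area per slide
                long runningHeight = 0;

                string[] rowTexts = { "short", new string('x', 200), "another row" }; // sample data
                foreach (string text in rowTexts)
                {
                    long rowHeight = EstimateRowHeightEmu(text);
                    if (runningHeight + rowHeight > usableHeightEmu)
                    {
                        Console.WriteLine("-- start a new slide and clone the table template --");
                        runningHeight = 0;
                    }
                    runningHeight += rowHeight;
                    Console.WriteLine("append row (~{0} EMU high)", rowHeight);
                }
            }
        }

    The constants would need tuning against the actual template, but this sidesteps the need for PowerPoint's own layout pass.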

  • Android Augmented Reality

    - by Azooz Totti
    I'm working on my first Android augmented-reality application. The application works pretty well if the ARActivity runs as the first class (android.intent.category.LAUNCHER) in the manifest file. But when I add a splash screen, which means the ARActivity is the second to run (android.intent.category.DEFAULT), the camera does not seem to detect the marker. I believe the problem is in the manifest file. Any suggestions? Thanks.

    This is the manifest.xml:

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.ar.armarkers"
            android:versionCode="1"
            android:versionName="1.0" >

            <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="15" />

            <uses-permission android:name="android.permission.CAMERA" />
            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
            <uses-feature android:name="android.hardware.camera" />
            <uses-feature android:name="android.hardware.camera.autofocus" />

            <application
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name"
                android:theme="@style/AppTheme" >
                <activity
                    android:name=".Splash"
                    android:screenOrientation="landscape"
                    android:theme="@android:style/Theme.Black.NoTitleBar.Fullscreen" >
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
                <activity
                    android:name="com.ar.armarkers.MainActivity"
                    android:clearTaskOnLaunch="true"
                    android:label="@string/title_activity_main"
                    android:noHistory="true"
                    android:screenOrientation="landscape" >
                    <intent-filter>
                        <action android:name="com.ar.armarkers.MAINACTIVITY" />
                        <category android:name="android.intent.category.DEFAULT" />
                    </intent-filter>
                </activity>
            </application>
        </manifest>

    MainActivity.java:

        import edu.dhbw.andar.ARObject;
        import edu.dhbw.andar.ARToolkit;
        import edu.dhbw.andar.AndARActivity;
        import edu.dhbw.andar.exceptions.AndARException;
        import edu.dhbw.andar.pub.CustomRenderer;
        import android.os.Bundle;
        import android.util.Log;

        public class MainActivity extends AndARActivity {
            private ARObject someObject;
            private ARToolkit artoolkit;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                CustomRenderer renderer = new CustomRenderer();
                setNonARRenderer(renderer);
                try {
                    artoolkit = getArtoolkit();
                    someObject = new CustomObject1("test", "marker16.pat", 80.0, new double[] { 0, 0 });
                    artoolkit.registerARObject(someObject);
                    someObject = new CustomObject2("test", "marker17.patt", 80.0, new double[] { 0, 0 });
                    artoolkit.registerARObject(someObject);
                } catch (AndARException ex) {
                    System.out.println("");
                }
                startPreview();
            }

            public void uncaughtException(Thread thread, Throwable ex) {
                // TODO Auto-generated method stub
                Log.e("AndAR EXCEPTION", ex.getMessage());
                finish();
            }
        }

    Splash.java:

        import android.app.Activity;
        import android.app.ProgressDialog;
        import android.content.Intent;
        import android.os.Bundle;
        import android.view.Gravity;

        public class Splash extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                // TODO Auto-generated method stub
                super.onCreate(savedInstanceState);
                setContentView(R.layout.splash);
                Thread timer = new Thread() {
                    public void run() {
                        try {
                            sleep(1000);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        } finally {
                            Intent i = new Intent(getApplicationContext(), MainActivity.class);
                            startActivity(i);
                        }
                    }
                };
                timer.start();
            }
        }

  • Rails: help looping through a has_one and belongs_to association

    - by Rails beginner
    This is my Kategori controller's show action:

        def show
          @kategori = Kategori.find(params[:id])
          @konkurrancer = @kategori.konkurrancer

          respond_to do |format|
            format.html # show.html.erb
            format.xml  { render :xml => @kategori }
          end
        end

    This is the kategori show view file:

        <% @konkurrancer.each do |vind| %>
          <td><%= vind.name %></td>
          <td>4 ud af 5</td>
          <td><%= number_to_currency(vind.vaerdi, :unit => "DKK", :separator => ".",
                  :delimiter => ".", :format => "%n %u", :precision => 0) %></td>
          <td>2 min</td>
          <td>Nyhedsbrev</td>
          <td><%= vind.udtraekkes.strftime("%d %B") %></td>
          </tr>
        <% end %>

    My Kategori model:

        class Kategori < ActiveRecord::Base
          has_one :konkurrancer
        end

    My Konkurrancer model:

        class Konkurrancer < ActiveRecord::Base
          belongs_to :kategori
        end

    I want to show all of the konkurrancer records that have an association to the kategori model. With my code I get the following error:

        NoMethodError in Kategoris#show

        Showing C:/Rails/konkurranceportalen/app/views/kategoris/show.html.erb
        where line #12 raised:

        undefined method `each' for #<Konkurrancer:0x...>

  • In XSLT, is it possible to use the value of an XPath expression in a call to a template using a param?

    - by Cell
    I am performing an XSL transform, and in it I call a template with a param using the following code:

        <xsl:call-template name="GenerateColumns">
          <xsl:with-param name="curRow" select="$curRow"/>
          <xsl:with-param name="curCol" select="$curCol + 1"/>
        </xsl:call-template>

    This calls a template which outputs part of a table element in HTML. curRow and curCol are used to determine which row and column we are at in the table; gbl_maxCols is set to the number of columns in the HTML table:

        <xsl:template name="GenerateColumns">
          <xsl:when test="$curCol &lt;= $gbl_maxCols">
            <td>
              <xsl:attribute name="colspan">
                <xsl:value-of select="/page/column/@skipColumns"/>
              </xsl:attribute>
            </td>
          </xsl:when>
        </xsl:template>

    The result of this template is a set of td elements; however, some of these elements (those with a skipColumns attribute greater than 1) span more than one column, and I need to skip that many columns on the next call to GenerateColumns. This works just as I would expect in the case where I simply increment the curCol param, and iterating through all the columns this way covers the majority of my use cases. However, in some cases I need to skip over some of the columns, and I need to pass in the value from the XML attribute skipColumns to calculate the value for curCol. My naive first attempt was something like this:

        <xsl:call-template name="GenerateColumns">
          <xsl:with-param name="curRow" select="$curRow"/>
          <xsl:with-param name="curCol" select="$curCol + /page/column/@skipColumns"/>
        </xsl:call-template>

    But unfortunately this does not seem to work. Is there any way to use an attribute from an XML page in the calculation of the value of a param in XSL? My XML page is something like this (edited heavily, since the XML file is rather large):

        <page>
          <column name="blank" skipColumns="1"/>
          <column name="blank" skipColumns="1"/>
          <column name="test" skipColumns="3"/>
          <column name="blank" skipColumns="1"/>
          <column name="test2" skipColumns="6"/>
        </page>

    After all of this I would like to have a set of td elements like the following:

        <td></td><td></td><td colSpan="3"></td><td></td><td colSpan="6"></td>

    If I just iterate through the columns I instead end up with something like this, which gives me more td elements than I should have:

        <td></td><td></td><td colSpan="3"></td><td></td><td colSpan="6"></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td>

    (Edited to provide more information.)

  • Writing robust and "modern" Fortran code

    - by Blklight
    In some scientific environments, you often cannot go without FORTRAN, as most of the developers only know that idiom and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high-performance programming (C++ would do the task, but the syntax, zero-based arrays, and pointers are too much for most engineers ;-) ). I'm a C++ guy, but I'm stuck with some F90 projects.

    So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it, while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM compilers). So I'm thinking of imposing:

    - global variables forbidden, no gotos, no jump labels, "implicit none", etc.
    - "object-oriented programming" (modules with datatypes + related subroutines)
    - modular/reusable functions, well documented, reusable libraries
    - assertions/preconditions/invariants (implemented using preprocessor statements)
    - unit tests for all (most) subroutines and "objects"
    - an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.)
    - a uniform and enforced legible coding style, using code-processing tools
    - C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.)
    - C/C++ implementation of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.)

    (This may all seem like "evident" modern programming practice, but in a legacy FORTRAN world most of these are big changes to the typical programmer's workflow.) The goal of all this is to have trustworthy, maintainable and modular code, whereas in typical FORTRAN, modularity is often not a primary goal, and code is trustworthy only if the original developer was very clever and the code has not been changed since then! (I'm only half joking here.)

    I searched around for references about object-oriented FORTRAN and programming-by-contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes and papers written by people with no large-scale project involvement, and dead projects. Any good URLs, advice, or reference papers/books on the subject?

  • Getting "" at the beginning of my XML File after save()

    - by Remy
    I'm opening an existing XML file with C#, and I replace some nodes in there. All works fine. Just after I save it, I get the following characters at the beginning of the file: "ï»¿" (EF BB BF in hex). The whole first line:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>

    The rest of the file looks like a normal XML file. The simplified code is here:

        XmlDocument doc = new XmlDocument();
        doc.Load(xmlSourceFile);
        XmlNode translation = doc.SelectSingleNode("//trans-unit[@id='127']");
        translation.InnerText = "testing";
        doc.Save(xmlTranslatedFile);

    I'm using a C# WinForms application with .NET 4.0. Any ideas? Why would it do that? Can we disable it somehow? It's for Adobe InCopy, and InCopy does not open the file like this.

    UPDATE - alternative solution: saving it with an XmlTextWriter works too:

        XmlTextWriter writer = new XmlTextWriter(inCopyFilename, null);
        doc.Save(writer);
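
    Those three bytes are the UTF-8 byte-order mark, which XmlDocument.Save(string) writes by default; the XmlTextWriter constructed with a null encoding above evidently omits it. An equivalent, more explicit sketch is to save through an XmlWriter configured with a BOM-less UTF8Encoding (these are standard .NET APIs; the file names are placeholders):

        using System.Text;
        using System.Xml;

        class SaveWithoutBom
        {
            static void Main()
            {
                var doc = new XmlDocument();
                doc.LoadXml("<root><trans-unit id=\"127\">old</trans-unit></root>");

                var settings = new XmlWriterSettings
                {
                    // UTF8Encoding(false) writes UTF-8 *without* the EF BB BF preamble
                    Encoding = new UTF8Encoding(false),
                    Indent = true
                };
                using (var writer = XmlWriter.Create("out.xml", settings))
                {
                    doc.Save(writer);
                }
            }
        }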

  • How do I convert Data::Dumper output back into a Perl data structure?

    - by newbee_me
    Hi all! I was wondering if you could shed some light on the code I've been working on for a couple of days. I've been trying to convert a Perl-parsed hash back to XML using the XMLout() and XMLin() methods, and it has been quite successful with this format:

        #!/usr/bin/perl -w
        use strict;

        # use module
        use IO::File;
        use XML::Simple;
        use XML::Dumper;
        use Data::Dumper;

        my $dump = new XML::Dumper;
        my ( $data, $VAR1 );

        Topology:$VAR1 = {
            'device' => {
                'FOC1047Z2SZ' => {
                    'ChassisID' => '2009-09',
                    'Error' => undef,
                    'Group' => {
                        'ID' => 'A1',
                        'Type' => 'Base'
                    },
                    'Model' => 'CATALYST',
                    'Name' => 'CISCO-SW1',
                    'Neighbor' => {},
                    'ProbedIP' => 'TEST',
                    'isDerived' => 0
                }
            },
            'issues' => [ 'TEST' ]
        };

        # create object
        my $xml = new XML::Simple (NoAttr=>1, RootName=>'data', SuppressEmpty => 'true');

        # convert Perl array ref into XML document
        $data = $xml->XMLout($VAR1);

        # reads an XML file
        my $X_out = $xml->XMLin($data);

        # access XML data
        print Dumper($data);
        print "STATUS: $X_out->{issues}\n";
        print "CHASSIS ID: $X_out->{device}{ChassisID}\n";
        print "GROUP ID: $X_out->{device}{Group}{ID}\n";
        print "DEVICE NAME: $X_out->{device}{Name}\n";
        print "DEVICE NAME: $X_out->{device}{name}\n";
        print "ERROR: $X_out->{device}{error}\n";

    I can access all the elements in the XML with no problem. But when I try to create a file to house the parsed hash, a problem arises, because I can't seem to access all the XML elements; I guess I wasn't able to unparse the file with the following code:

        #!/usr/bin/perl -w
        use strict;

        # use module
        use IO::File;
        use XML::Simple;
        use XML::Dumper;
        use Data::Dumper;

        my $dump = new XML::Dumper;
        my ( $data, $VAR1, $line_Holder );

        # this is the file that contains the parsed hash
        my $saveOut = "C:/parsed_hash.txt";
        my $result_Holder = IO::File->new($saveOut, 'r');
        while ($line_Holder = $result_Holder->getline){
            print $line_Holder;
        }

        # create object
        my $xml = new XML::Simple (NoAttr=>1, RootName=>'data', SuppressEmpty => 'true');

        # convert Perl array ref into XML document
        $data = $xml->XMLout($line_Holder);

        # reads an XML file
        my $X_out = $xml->XMLin($data);

        # access XML data
        print Dumper($data);
        print "STATUS: $X_out->{issues}\n";
        print "CHASSIS ID: $X_out->{device}{ChassisID}\n";
        print "GROUP ID: $X_out->{device}{Group}{ID}\n";
        print "DEVICE NAME: $X_out->{device}{Name}\n";
        print "DEVICE NAME: $X_out->{device}{name}\n";
        print "ERROR: $X_out->{device}{error}\n";

    Do you have any idea how I could access the $VAR1 inside the text file?

    Regards, newbee_me

  • Passing a variable from Excel 2007 Custom Task Pane to Hosted PowerShell

    - by Uros Calakovic
    I am testing PowerShell hosting using C#. Here is a console application that works:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Management.Automation;
        using System.Management.Automation.Runspaces;
        using Microsoft.Office.Interop.Excel;

        namespace ConsoleApplication3
        {
            class Program
            {
                static void Main()
                {
                    Application app = new Application();
                    app.Visible = true;
                    app.Workbooks.Add(XlWBATemplate.xlWBATWorksheet);

                    Runspace runspace = RunspaceFactory.CreateRunspace();
                    runspace.Open();
                    runspace.SessionStateProxy.SetVariable("Application", app);

                    Pipeline pipeline = runspace.CreatePipeline("$Application");
                    Collection<PSObject> results = null;
                    try
                    {
                        results = pipeline.Invoke();
                        foreach (PSObject pob in results)
                        {
                            Console.WriteLine(pob);
                        }
                    }
                    catch (RuntimeException re)
                    {
                        Console.WriteLine(re.GetType().Name);
                        Console.WriteLine(re.Message);
                    }
                }
            }
        }

    I first create an Excel.Application instance and pass it to the hosted PowerShell instance as a variable named $Application. This works, and I can use this variable as if Excel.Application had been created from within PowerShell.

    I next created an Excel add-in using VS 2008 and added a user control with two text boxes and a button to the add-in (the user control appears as a custom task pane when Excel starts). The idea was this: when I click the button, a hosted PowerShell instance is created, and I pass it the current Excel.Application instance as a variable, just like in the first sample, so I can use this variable to automate Excel from PowerShell (one text box is for input, the other for output). Here is the code:

        using System;
        using System.Windows.Forms;
        using System.Management.Automation;
        using System.Management.Automation.Runspaces;
        using System.Collections.ObjectModel;
        using Microsoft.Office.Interop.Excel;

        namespace POSHAddin
        {
            public partial class POSHControl : UserControl
            {
                public POSHControl()
                {
                    InitializeComponent();
                }

                private void btnRun_Click(object sender, EventArgs e)
                {
                    txtOutput.Clear();
                    Microsoft.Office.Interop.Excel.Application app = Globals.ThisAddIn.Application;

                    Runspace runspace = RunspaceFactory.CreateRunspace();
                    runspace.Open();
                    runspace.SessionStateProxy.SetVariable("Application", app);

                    Pipeline pipeline = runspace.CreatePipeline(
                        "$Application | Get-Member | Out-String");

                    app.ActiveCell.Value2 = "Test";

                    Collection<PSObject> results = null;
                    try
                    {
                        results = pipeline.Invoke();
                        foreach (PSObject pob in results)
                        {
                            txtOutput.Text += pob.ToString() + "-";
                        }
                    }
                    catch (RuntimeException re)
                    {
                        txtOutput.Text += re.GetType().Name;
                        txtOutput.Text += re.Message;
                    }
                }
            }
        }

    The code is similar to the first sample, except that the current Excel.Application instance is available to the add-in via Globals.ThisAddIn.Application (VSTO-generated), and I can see that it really is a Microsoft.Office.Interop.Excel.Application instance, because I can use things like app.ActiveCell.Value2 = "Test" (this actually puts the text into the active cell). But when I pass the Excel.Application instance to the PowerShell instance, what arrives there is an instance of System.__ComObject, and I can't figure out how to cast it to Excel.Application. When I examine the variable from PowerShell using $Application | Get-Member, this is the output I get in the second text box:

        TypeName: System.__ComObject

        Name                      MemberType Definition
        ----                      ---------- ----------
        CreateObjRef              Method     System.Runtime.Remoting.ObjRef CreateObjRef(Type requestedType)
        Equals                    Method     System.Boolean Equals(Object obj)
        GetHashCode               Method     System.Int32 GetHashCode()
        GetLifetimeService        Method     System.Object GetLifetimeService()
        GetType                   Method     System.Type GetType()
        InitializeLifetimeService Method     System.Object InitializeLifetimeService()
        ToString                  Method     System.String ToString()

    My question is: how can I pass an instance of Microsoft.Office.Interop.Excel.Application from a VSTO-generated Excel 2007 add-in to a hosted PowerShell instance, so that I can manipulate it from PowerShell? (I have previously posted this question in the Microsoft C# forum without an answer.)
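
    One avenue worth noting (an assumption on my part, not a verified fix for the add-in scenario): even when an object surfaces as System.__ComObject, its members remain reachable late-bound through IDispatch, so from the hosting C# side you can at least verify that the variable round-trips intact. A sketch:

        using System.Reflection;

        static class ComProbe
        {
            // 'runspace' is the opened runspace from the add-in code above.
            public static void ProveExcelIsAlive(System.Management.Automation.Runspaces.Runspace runspace)
            {
                // Retrieve the variable back out of the session state.
                object app = runspace.SessionStateProxy.GetVariable("Application");

                // Late-bound (IDispatch) member access works on System.__ComObject wrappers.
                object activeCell = app.GetType().InvokeMember(
                    "ActiveCell", BindingFlags.GetProperty, null, app, null);
                activeCell.GetType().InvokeMember(
                    "Value2", BindingFlags.SetProperty, null, activeCell, new object[] { "From C#" });
            }
        }

    If that works, the object itself survived the trip, and the remaining question is why PowerShell's COM adapter fails to expose its members inside the VSTO process.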

  • How to fix this java.lang.LinkageError?

    - by Péter Török
    I am trying to configure a custom layout class for Log4J, as described in my previous post. The class uses java.util.regex.Matcher to identify potential credit card numbers in log messages. It works perfectly in unit tests (I can also programmatically configure a logger to use it and produce the expected output). However, when I try to deploy it with our app in JBoss, I get the following error:

        --- MBEANS THAT ARE THE ROOT CAUSE OF THE PROBLEM ---
        ObjectName: jboss.web.deployment:war=MyWebApp-2010_02-SNAPSHOT.war,id=476602902
          State: FAILED
          Reason: java.lang.LinkageError: java/util/regex/Matcher

    I couldn't even find any info on this form of the error - typically LinkageError seems to show up with a "loader constraint violation" message, like in here.

    Technical details: we use JBoss 4.2, Java 5, Log4J 1.2.12. We deploy our app in an .ear, which contains (among others) the above-mentioned .war file and the custom layout class in a separate jar file. We override the default settings in jboss-log4j.xml with our own log4j.properties located in a different folder, which is added to the classpath at startup and is provided via Carbon.

    I can only guess: are two different Matcher class versions loaded from somewhere, or is Matcher loaded by two different classloaders when it is used from the jar and the war? What does this error message mean, and how can I fix it?

  • Dependency Injection with Massive ORM: dynamic trouble

    - by Sergi Papaseit
    I've started working on an MVC 3 project that needs data from an enormous existing database. My first idea was to go ahead and use EF 4.1 and create a bunch of POCOs to represent the tables I need, but I'm starting to think the mapping will get overly complicated, as I only need some of the columns in some of the tables (thanks to Steven for the clarification in the comments). So I thought I'd give the Massive ORM a try.

    I normally use a Unit of Work implementation so I can keep everything nicely decoupled and can use dependency injection. This is part of what I have for Massive:

        public interface ISession
        {
            DynamicModel CreateTable<T>() where T : DynamicModel, new();
            dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new();
            dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new();
            // Some more methods supported by Massive here
        }

    And here's my implementation of the above interface:

        public class MassiveSession : ISession
        {
            public DynamicModel CreateTable<T>() where T : DynamicModel, new()
            {
                return new T();
            }

            public dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(where, args);
            }

            public dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(key, columns);
            }
        }

    The problem comes with the First(), Last() and FindBy() methods. Massive is based around a dynamic object called DynamicModel and doesn't define any of the above methods; it handles them through a TryInvokeMember() override inherited from DynamicObject instead:

        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
        {
        }

    I'm at a loss as to how to "interface" those methods in my ISession. How could my ISession provide support for First(), Last() and FindBy()? Put another way, how can I use all of Massive's capabilities and still be able to decouple my classes from data access?
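
    One option (a sketch of a possible design, not the only answer): since those members only exist through TryInvokeMember, ISession can hand back a dynamic reference and let the DLR route the call into DynamicModel at runtime. Products and CategoryID below are made-up names:

        // Added to ISession:
        dynamic Table<T>() where T : DynamicModel, new();

        // Added to MassiveSession:
        public dynamic Table<T>() where T : DynamicModel, new()
        {
            // Returned as dynamic, so First()/Last()/FindBy() dispatch
            // through DynamicModel.TryInvokeMember at runtime.
            return CreateTable<T>();
        }

        // Usage sketch:
        // var product = session.Table<Products>().First(CategoryID: 5);

    The trade-off is that calls through the dynamic handle lose compile-time checking, which is arguably inherent to wrapping a dynamic ORM behind a static interface.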

  • Common practice for higher-order polymorphism in Scala

    - by raichoo
    Hi, I'm trying to grasp higher-order polymorphism in Scala by implementing a very basic interface that describes a monad, but I've come across a problem that I don't really understand. I implemented the same thing in C++, and the code looks like this:

        #include <iostream>

        template <typename T>
        class Value {
        private:
            T value;
        public:
            Value(const T& t) {
                this->value = t;
            }
            T get() {
                return this->value;
            }
        };

        template < template <typename> class Container >
        class Monad {
        public:
            template <typename A>
            Container<A> pure(const A& a);
        };

        template <template <typename> class Container>
        template <typename A>
        Container<A> Monad<Container>::pure(const A& a) {
            return Container<A>(a);
        }

        int main() {
            Monad<Value> m;
            std::cout << m.pure(1).get() << std::endl;
            return 0;
        }

    When I try to do the same in Scala, I fail:

        class Value[T](val value: T)

        class Monad[Container[T]] {
          def pure[A](a: A): Container[A] =
            Container[A](a)
        }

        object Main {
          def main(args: Array[String]): Unit = {
            val m = new Monad[Value]
            m.pure(1)
          }
        }

    The compiler complains:

        [raichoo@lain:Scala]:434> scalac highorder.scala
        highorder.scala:5: error: not found: value Container
            Container[A](a)
            ^
        one error found

    What am I doing wrong here? There seems to be a fundamental concept I don't understand about Scala type constructors.

    Regards, raichoo

  • SQL UNION ALL problem after using UNION ALL more than 10 times

    - by VBGKM
    I'm getting a formatting problem if I use more than 10 UNION ALL statements in my VBA code; if I use 10 or fewer, everything works great. What I'm trying to do is combine 12 worksheets (Excel 2007). I have a numerical column called SC that turns into string and date values if I have more than 10 UNION ALLs. If I try to use ROUND with more than 10 UNION ALLs, my last selection changes all the records by one unit. I'm using Microsoft.ACE.OLEDB.12.0 as my provider, and my connection string has worked for several other things in my code so far. Is there any limit on UNION ALL statements when using OLEDB? Here is my code:

        Dim StrOr As String
        Dim i As Variant
        Dim Cnt As ADODB.Connection
        Dim Rs As ADODB.Recordset

        For i = 1 To 12
            StrOr = StrOr & " " & "SELECT SC FROM [" & MonthName(i, True) & "$" & "] UNION ALL"
        Next
        StrOr = Left(StrOr, Len(StrOr) - 9) & ";"

        Call GetADOCnt
        Call ADORs
