Search Results

Search found 16948 results on 678 pages for 'static analysis'.

Page 391 of 678

  • windows 7 losing wired ethernet connection

    - by Brandon Grossutti
    I have a Win 7 machine with an Atheros L1 Gigabit Ethernet 10/100/1000Base-T controller with the latest drivers; the machine is up to date with all the latest fixes. It connects to a WRT310Nv2 router. Seemingly at random, Win 7 disconnects from the "Home Network", reports that it is on a "public network", then resets the connection to an invalid 169.254.x.x (APIPA) address. I have tried static IPs and DHCP, all with the same results. This seems to have started shortly after I installed Vuze, so I uninstalled it, but the problem persists. I know the router is sound, given that I have an XP machine attached with no connectivity issues at all. I am at a complete loss and have tried everything; please tell me I'm not the only one.

    Read the article

  • Coarse Collision Detection in highly dynamic environment

    - by Millianz
    I'm currently working on a 3D space game with a LOT of dynamic objects that are all moving (there is pretty much no static environment). I have collision detection and resolution working just fine, but I am now trying to optimize the collision detection, which is currently O(N^2) -- a brute-force test of every pair. I have thought about multiple options: a bounding volume hierarchy, a binary space partitioning (BSP) tree, an octree, or a grid. However, I need some help deciding what's best for my situation. A grid seems infeasible simply due to the space requirements and cache coherence problems. Since everything is so dynamic, it seems that trees aren't ideal either, since they would have to be completely rebuilt every frame. I must admit I have never implemented a physics engine that required spatial partitioning: do I indeed need to rebuild the tree every frame (assuming that everything is constantly moving), or can I update the trees after integrating? Advice is much appreciated. To give some more background: you're flying a space ship in an asteroid field, and there are lots and lots of asteroids and some enemy ships, all of which shoot bullets. EDIT: I came across the "Sweep and Prune" algorithm, which seems like the right thing for my purposes. It appears to be the right mixture of fast construction of the data structures involved and detailed enough partitioning. This is the best resource I can find: http://www.codercorner.com/SAP.pdf If anyone has any suggestions on whether or not I'm going in the right direction, please let me know.
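
    Since the post settles on sweep and prune, a minimal single-axis sketch of the idea may help readers evaluate it (illustrative names only, not the full algorithm from the linked paper): sort objects by the lower bound of their interval on one axis, sweep once while keeping a set of "open" intervals, and only overlapping intervals become candidate pairs. A real implementation keeps the list sorted incrementally between frames, which is where the speed comes from.

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        // Minimal 1D sweep-and-prune broad phase (illustrative sketch only).
        final class SweepAndPrune {
            // One object's extent along the sweep axis.
            static final class Interval {
                final int id;
                final float min, max;
                Interval(int id, float min, float max) { this.id = id; this.min = min; this.max = max; }
            }

            // Returns candidate pairs whose intervals overlap on this axis;
            // the narrow phase must still test these pairs for real contact.
            static List<int[]> candidatePairs(List<Interval> objects) {
                List<Interval> sorted = new ArrayList<>(objects);
                // Frame-to-frame coherence keeps this list nearly sorted, so an
                // incremental insertion sort is close to O(n) in practice.
                sorted.sort(Comparator.comparingDouble(iv -> iv.min));

                List<int[]> pairs = new ArrayList<>();
                List<Interval> open = new ArrayList<>(); // intervals still active
                for (Interval cur : sorted) {
                    open.removeIf(o -> o.max < cur.min); // drop expired intervals
                    for (Interval o : open) {
                        pairs.add(new int[] { o.id, cur.id });
                    }
                    open.add(cur);
                }
                return pairs;
            }
        }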

    Read the article

  • Which problem(s) do YOU want to see solved?

    - by buu700
    My team and I are meeting tonight to come up with a business plan, and some community input would be amazing. I've been mulling over this issue for the past few months and bouncing ideas off of others, and now I'd finally like some input from the community. I have come up with a fair selection of ideas, but most of those amount to either fun projects which could potentially be profitable, or otherwise solid business models that have one or two major hurdles (usually related to resources or legality). For our team meeting tonight, my idea is to take inventory of our available skills, resources, and compelling problems which interest us. The last is where I would greatly appreciate some community input. Hell, even entire business ideas/plans would be appreciated. No matter how big or small your thoughts, any input would be appreciated. We're a team of computer scientists, so our business will be primarily based around software/technology/Web solutions. Among my relevant available resources (entire Internet aside), I have the following: A pretty reliable connection to an SEO company and a large production company. A stash of fairly powerful server hardware. A fast network with static IPs. The backend for Hackswipe, which includes credit card payment processing and a Google Voice-based SMS gateway. This work-in-progress design for something completely unrelated but which is backed by some fairly decent infrastructure. Direct access to the experts in just about any relevant field (on-campus Carnegie Mellon professors). A sexual relationship with the baron of a small nation. For further down the line, some investor relationships. Not likely to be so relevant, but a decent social media presence (Stack Overflow reputation, modship in some major reddits, various tech forums). The source code for Eugene fucking McCabe. Pooled with the other team members, the list of projects we can build off of would be longer (including an Android app). So, what are your thoughts? Crossposted to reddit

    Read the article

  • Java game object pool management

    - by Kenneth Bray
    Currently I am using arrays to handle all of my game objects in the game I am making, and I know how terrible this is for performance. My question is: what is the best way to handle game objects without hurting performance? Here is how I am creating an array and then looping through it to update the objects in the array:

        public static ArrayList<VboCube> game_objects = new ArrayList<VboCube>();
        /* add objects to the game */

        while (!Display.isCloseRequested() && !Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
            for (int i = 0; i < game_objects.size(); i++) {
                // draw and update each object in turn
                game_objects.get(i).Draw();
                game_objects.get(i).Update();
                //world.updatePhysics();
            }
        }

    I am not looking for someone to write me code for asset or object management, just point me in a better direction to get better performance. I appreciate the help you guys have provided me in the past, and I don't think I would be as far along with my project without the support on Stack Exchange!
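
    Since the question above is about avoiding per-frame allocation and garbage collection, here is a minimal free-list pooling sketch (the VboCube usage at the bottom is hypothetical, mirroring the poster's type; a real manager would also reset object state on release):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.function.Supplier;

        // Minimal generic object pool (illustrative sketch, not a full manager).
        final class ObjectPool<T> {
            private final Deque<T> free = new ArrayDeque<>();
            private final Supplier<T> factory;

            ObjectPool(Supplier<T> factory, int preallocate) {
                this.factory = factory;
                for (int i = 0; i < preallocate; i++) {
                    free.push(factory.get()); // pay allocation cost up front
                }
            }

            // Reuse a free instance if one exists; allocate only when empty.
            T acquire() {
                return free.isEmpty() ? factory.get() : free.pop();
            }

            // Return an instance for later reuse instead of letting the GC collect it.
            void release(T object) {
                free.push(object);
            }
        }

        // Hypothetical usage with the post's VboCube type (assumes a no-arg constructor):
        //   ObjectPool<VboCube> cubes = new ObjectPool<>(VboCube::new, 256);
        //   VboCube c = cubes.acquire();
        //   ... use c during the frame ...
        //   cubes.release(c); // caller should reset state before reuse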

    Read the article

  • PLINQ Adventure Land - WaitForAll

    - by adweigert
    PLINQ is awesome for getting a lot of work done fast, but one thing I haven't figured out yet is how to start work with PLINQ but only let it execute for a maximum amount of time and react if it is taking too long. So, as I must admit I am still learning PLINQ, I created this extension in that ignorance. It behaves similarly to ForAll<> but takes a timeout and returns false if the threads don't complete in the specified amount of time. Hope this helps someone else take PLINQ further; it has definitely helped me ...

        public static bool WaitForAll<T>(this ParallelQuery<T> query, TimeSpan timeout, Action<T> action)
        {
            Contract.Requires(query != null);
            Contract.Requires(action != null);

            var exception = (Exception)null;
            var cts = new CancellationTokenSource();
            var forAllWithCancellation = new Action(delegate
            {
                try
                {
                    query.WithCancellation(cts.Token).ForAll(action);
                }
                catch (OperationCanceledException)
                {
                    // NOOP
                }
                catch (AggregateException ex)
                {
                    exception = ex;
                }
            });

            var mrs = new ManualResetEvent(false);
            var callback = new AsyncCallback(delegate { mrs.Set(); });
            var result = forAllWithCancellation.BeginInvoke(callback, null);

            if (mrs.WaitOne(timeout))
            {
                forAllWithCancellation.EndInvoke(result);
                if (exception != null)
                {
                    throw exception;
                }
                return true;
            }
            else
            {
                cts.Cancel();
                return false;
            }
        }

    Read the article

  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
    I just returned from Doha, Qatar, where the first of its kind HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC, EMEA, as well as smaller regional HEUGs in the Netherlands, South Africa, and in regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. To be known as the Arab HEUG going forward, I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular, the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities. Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, and one on Qatar University's use of E-Business Suite for grants management (both pre- and post-award). The most notable fact coming from this latter presentation was the fit (89%) of E-Business Suite Grants to the university's requirements. In a few weeks' time we will be convening the 5th meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (the 5th since my advent into my current role). The main topics of discussion will be our Higher Education Applications Strategy for the future, including cloud approaches to ERP (HCM, Finance, and Student Information Systems) and some case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in Research and also to budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion. I recently read a couple of articles that were surprising to me. The first was from Inside Higher Ed on October 15, entitled "As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors." It continues to disappoint me that after all this time we are still debating whether it is better to build enterprise software through open or community source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. A decade or more ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let's not kid ourselves -- the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community source system.
    In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it's bordering on irresponsible to continue to pursue open-source ERP. Many adopters' total costs are staggering, with little to show for their efforts and expended resources. The second article was recently in the Chronicle of Higher Education, entitled "'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders." This one was so outrageous I almost don't want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama's former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he's a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah... right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I'll grant Mr. Reed that "Big Data" is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to produce far more data scientists and analysts than are currently in the pipeline, to further this discipline and the application of these tools to many other problems across multiple industries.

    Read the article

  • Does Apache2 Configured as ReverseProxy Hide Cookies Set by Backend Servers?

    - by Ianthe
    I use Apache 2.2.16 as a reverse proxy. For a static website I don't have any issues. However, when I began to use cookies, I noticed that cookies are not being sent to the client. Here's a snippet of my config:

        <VirtualHost *:80>
            ServerName app.somewhere.com:80
            ServerAlias app
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass /app http://10.x.x.x/app
            ProxyPassReverse /app http://10.x.x.x/app
            <Location />
                Order allow,deny
                Allow from all
            </Location>
        </VirtualHost>

    But when I try to access the app server directly, I receive the cookies fine. Is this expected behaviour for Apache2? I'm using HAProxy for another application that sends cookies to the client, and there I get all of them.
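
    One note on the question above, hedged since the backend's cookie settings aren't shown: ProxyPassReverse only rewrites Location-style response headers, not Set-Cookie. If the backend scopes its cookies to a domain or path the client never sees, Apache 2.2's companion mod_proxy directives can rewrite them, along these lines:

        # Rewrite the Set-Cookie domain issued by the backend so the
        # browser accepts it for the proxy's hostname:
        ProxyPassReverseCookieDomain 10.x.x.x app.somewhere.com
        # And, if the backend scoped its cookies to a different path:
        ProxyPassReverseCookiePath / /app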

    Read the article

  • can ping, but not SSH

    - by Matt
    So I have NetworkManager, connected to an AP on wlan1. I have wlan0 connected to an ad-hoc network. I have Firestarter sharing my inet on the ad-hoc network, and my iPod connected to wlan0, IP 10.42.43.101.

        wlan0  Link encap:Ethernet  HWaddr ac:xx:12:81:7f:xx
               inet addr:10.42.43.1  Bcast:10.255.255.255  Mask:255.0.0.0
        wlan1  Link encap:Ethernet  HWaddr 00:xx:b3:98:f2:xx
               inet addr:10.0.1.61  Bcast:10.0.1.255  Mask:255.255.255.0

    Now, I can ping my jailbroken, SSH-enabled and running iPod touch:

        matt: ~ $ ping 10.42.43.101
        PING 10.42.43.101 (10.42.43.101) 56(84) bytes of data.
        64 bytes from 10.42.43.101: icmp_req=1 ttl=64 time=168 ms
        64 bytes from 10.42.43.101: icmp_req=2 ttl=64 time=256 ms
        64 bytes from 10.42.43.101: icmp_req=3 ttl=64 time=151 ms
        ^C
        --- 10.42.43.101 ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 1999ms
        rtt min/avg/max/mdev = 151.465/191.979/256.316/46.003

    But I cannot SSH to it:

        $ ssh [email protected] -vv
        OpenSSH_5.8p1 Debian-1ubuntu3, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 10.42.43.101 [10.42.43.101] port 22.

    It just stays there till I ^C it. Here's my routing:

        $ ip route show
        10.0.1.0/24 dev wlan1  proto kernel  scope link  src 10.0.1.61  metric 2
        10.0.1.0/24 dev wlan1  proto kernel  scope link  src 10.0.1.61  metric 319
        169.254.0.0/16 dev vboxnet0  proto kernel  scope link  src 169.254.128.223  metric 204
        10.0.0.0/8 dev wlan0  proto kernel  scope link  src 10.42.43.1
        default via 10.0.1.1 dev wlan1  proto static
        default via 10.0.1.1 dev wlan1  metric 319

    Read the article

  • 3CX behind UT7.1 using a callcentric.com SIP account

    - by Corey
    Has anyone had any luck getting 3CX working behind UT7.1 with a SIP account from callcentric.com? I am willing to reset my current UT box back to defaults and start from there. I have a static public IP assigned to the external interface. My internal addressing is 192.168.76.0. My 3CX box has 192.168.76.17. Would anyone be willing to give me a step-by-step list of changes to make in UT / 3CX? I currently have my UT box unplugged and have replaced it with a Linksys unit. I have port forwarding set up for TCP/UDP 5060 to 192.168.76.17 and UDP 9000-9049 to 192.168.76.17, and everything works great. I also have additional external IPs available if that helps.

    Read the article

  • program logic of printing the prime numbers

    - by Vignesh Vicky
    Can anybody help me understand this Java program? It just prints prime numbers, as many as you ask for, and it works fine:

        import java.util.Scanner; // needed for reading n below

        class PrimeNumbers
        {
            public static void main(String args[])
            {
                int n, status = 1, num = 3;
                Scanner in = new Scanner(System.in);
                System.out.println("Enter the number of prime numbers you want");
                n = in.nextInt();
                if (n >= 1)
                {
                    System.out.println("First " + n + " prime numbers are :-");
                    System.out.println(2);
                }
                for (int count = 2; count <= n; )
                {
                    for (int j = 2; j <= Math.sqrt(num); j++)
                    {
                        if (num % j == 0)
                        {
                            status = 0;
                            break;
                        }
                    }
                    if (status != 0)
                    {
                        System.out.println(num);
                        count++;
                    }
                    status = 1;
                    num++;
                }
            }
        }

    I don't understand the condition of this for loop: for (int j = 2; j <= Math.sqrt(num); j++). Why are we taking the square root of num? And num starts at 3 -- why did we assume it is 3?
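
    For readers stuck on the same point, a short worked note (not part of the original question): divisors pair up, so trial division never needs to look past the square root, and num starts at 3 only because 2 is printed separately before the loop. A tiny illustration:

        // If num = a * b with a <= b, then a * a <= num, so a <= Math.sqrt(num):
        // every composite number has a divisor no larger than its square root.
        // Example: 36 = 2*18 = 3*12 = 4*9 = 6*6 -- each pair has a member <= 6.
        static boolean hasDivisor(int num) {
            for (int j = 2; j <= Math.sqrt(num); j++) {
                if (num % j == 0) return true; // divisor found: num is composite
            }
            return false; // no divisor up to sqrt(num): num is prime
        }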

    Read the article

  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: my site is ranking for filthy keywords and I would like to remove them from Google's rankings/keywords. Background: my server was hacked using the timthumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was so friendly as to modify the index.php file in such a fashion that it generated random sexually oriented keywords if the website was fetched as Googlebot. So if you fetched it as Googlebot (i.e. as it gets indexed), you would get randomly generated keywords like: sex videos teenager teen sex adult sex preteen A LINK TO A RANDOM CONTENT OF MY WEBPAGE anime sex videos -- a rough list, something similar to that, about 180-200 per page. I discovered it far too late, so Google had me indexed for the word "sex" and certain adult-oriented keywords, roughly 2,000 of them. I have removed all the content, taken the site down, replaced the index.php with a static HTML page, and added an "ERROR 410" title to the website so that the content is gone and removed permanently. I also applied for a manual review of my website about 1.5 months ago, but the keywords are still there, and, very strangely, some of the keyword rankings actually "improve" over time. I have screenshots from Webmaster Tools that show this. Question: how can I remove these filthy keywords and re-rank my website as a "normal" website as fast as possible? I want to "REMOVE" the keywords if possible. Please help me or point me in a direction. Thank you

    Read the article

  • Supporting copy 'n paste in your Windows Phone app

    - by Daniel Moth
    Some Windows Phone 7 owners already have the NoDo update, and others are getting it soon. This update brings, among other things, copy & paste support for text boxes. The user taps on a piece of text (and can drag in either direction to select more/less words), a popup icon appears that when tapped copies the text to the clipboard, and then at any app that shows the soft input panel there is an icon option to paste the copied text into the associated textbox. For more read this 'how to'. Note that there is no programmatic access to the clipboard, only the end user experience I just summarized, so there is nothing you need to do for your app's textboxes to support copy & paste: it just works. The only issue may be if in your app you use static TextBlock controls, for which the copy support will not appear, of course. That was the case with my Translator by Moth app where the translated text appears in a TextBlock. So, I wanted the user to be able to copy directly from the translated text (without offering an editable TextBox for an area where user input does not make sense). Take a look at a screenshot of my app before I made any changes to it. I then made changes to it preserving the look and feel, yet with additional copy support (see screenshot on the right)! So how did I achieve that? Simply by using my co-author's template (thanks Peter!): Copyable TextBlock for Windows Phone.   (aside: in my app even without this change there is a workaround, the user could use the "swap" feature to swap the source and target, so they can copy from the text box) Comments about this post welcome at the original blog.

    Read the article

  • Monitor network connectivity in WP7 apps

    - by Daniel Moth
    Most interesting Windows Phone apps rely on some network service for their functionality, so at some point in your app you may need to know programmatically whether there is a network connection available or not. For example, the Translator by Moth app relies on the Bing Translation service for translations. When a request for translation (text or voice) is made, the network call may fail. The failure reason is not evident from any of the return results, so I check the connection to see if it is present. Depending on that, a different message is shown to the user. Before the translation phase is even reached, at app start-up time the Bing service is queried for its list of languages; in that case I don't want to show the user a message, and instead want to be notified when the network is available in order to send the query out again. So for those two requirements (which I imagine are common in other apps) I wrote a simple static MyNetwork wrapper class over the framework APIs:

        1. Call the MyNetwork.MonitorNetworkAvailability() method once so it monitors network changes.
        2. At any time, check the MyNetwork.IsConnected property to check for network presence.
        3. Subscribe to its MyNetwork.ConnectionEstablished event.
        4. Optionally, during debugging, use its MyNetwork.ChangeStatus method to simulate a change in network status.

    As usual, there may be better ways to achieve this, but this class works perfectly for my scenarios. You can download the code here: MyNetworks.cs. Comments about this post welcome at the original blog.

    Read the article

  • Taking a Flying Leap

    - by Lance Shaw
    Yesterday, I went skydiving with three of my children. It was thrilling, scary, invigorating and exciting. While there is obvious risk involved, the reward and feeling of success was well worth it. You might already be wondering what skydiving has to do with WebCenter, so let me explain. Implementing a skydiving program and becoming an instructor does not happen overnight. It does not happen with the purchase of the needed technology. Not one of us would go out, buy a parachute, the harnesses, helmet and all the gear and be able to convince anyone that we are now ready to be a skydiving instructor. The fact is that obtaining the technology is merely a small piece of the overall process, and so is the case with managing content in your company. You don't just buy the right software (Oracle WebCenter Content) and go to your boss and declare information management success. There is planning, research and effort that goes into deploying software of any kind, especially when it is as mission-critical to the success of your business as Enterprise Content Management. To become a certified skydiving instructor takes at least 3 years of commitment, and often longer. In the United States, candidates must complete over 500 solo jumps of their own over a minimum of 36 months and then must complete additional rigorous training under observation. When you consider the amount of time and effort involved, it's not unlike getting a college degree, and anyone who has trusted their life to one of these instructors will no doubt appreciate their dedication to the curriculum. Implementing an ECM system won't take that long, but it certainly requires commitment, analysis and consideration. But guess what? Humans are involved, and that means that mistakes can happen and that rules change. This struck me while reading an excellent post on darkreading.com by Glenn S. Phillips entitled "Mission Impossible: 4 Reasons Compliance is Impossible". His overarching point was that with information management and security, environments change and people are involved, meaning the work is never done. He stated that you can never claim your compliance efforts are complete, for the following reasons:

        1. People are involved. And let's face it, some are more trustworthy than others.
        2. Change is constant. There is always some new technology coming along that is disruptive. Consumer-grade cloud file sharing and sync tools come to mind here.
        3. Compliance is interpreted, not defined. Laws and the judges that read them are always on the move.
        4. Technology is a tool, not a complete solution. There is no magic pill.

    The skydiving analogy holds true here as well. Ultimately, a single person packs your parachute. For obvious reasons, you prefer that this person be trustworthy, but there are no absolute guarantees of a 100% error-free scenario. Weather and wind conditions are never a constant, and the best-laid plans for a great day of skydiving are easily disrupted by forces outside of your control. Rules and regulations vary by location and may be updated at any time, and as I mentioned early on, even the best technology on its own will only get you started. The good news is that, like skydiving, with the right technology, the right planning, the right team and a proper understanding of the rules and regulations that govern your industry, your ECM deployment can be a great success.
    Failure to plan for any of the 4 factors that Glenn outlined in his article will certainly put your deployment, and maybe even your company, at risk, so consider them carefully. As a final aside, for those of you who consider skydiving an incredibly dangerous and risky pastime, consider this comparative statistic. In 2012, the U.S. Parachute Association recorded 19 fatal skydiving accidents in the U.S. out of roughly 3.1 million jumps. That's 0.006 fatalities per 1,000 jumps. By comparison, the U.S. National Highway Traffic Safety Administration reports that there were 34,080 deaths due to car accidents in 2012. Based on the percentages, one could argue that it is safer to jump out of a plane than to drive to the airport where the skydiving will take place. While the way you manage, secure, classify, control, retain and dispose of company files may not carry as much risk as driving or skydiving, it certainly carries risk for the organization when not planned and deployed appropriately. Consider all the factors involved in your organization as you make your content management plans. For additional areas of consideration, be sure to download our free whitepaper on the topic, entitled "The Top 10 Criteria for Choosing an ECM System", which is available for download here.

    Read the article

  • Exposing a WebServer behind a firewall without Port Forwarding

    - by pbreault
    We are deploying web applications in Java using Tomcat on client machines across the country. Once they are installed, we want to allow remote access to these web applications through a central server, but we do not want our clients to have to open ports on their routers. Is there a way to tunnel the HTTP traffic so that people connected to the central server can access the web applications that are behind a firewall? The central server has a static IP address and we have full control over it. Right now it is a Windows box, but it could be changed to a Linux box if necessary. Our clients are running Windows XP and up. We don't need to access the filesystem; we only want to access the web application through a browser. We have looked at reverse SSH tunneling, but it poses a scaling problem since every packet would have to pass through the central server.
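
    For reference, the reverse-tunnel approach the poster mentions is usually a single command run from each client behind its firewall (hostnames and ports here are placeholders). Note that making the forwarded port reachable from beyond the central server's loopback requires GatewayPorts yes in the server's sshd_config:

        # Run on the client: expose its local web app (port 80) as port 8080
        # on the central server, with no port forwarding on the client's router.
        ssh -N -R 8080:localhost:80 tunneluser@central.example.com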

    Read the article

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem: imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N of the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices.
    Question: if any given cell has its visibility modified, is it possible to extract the subset Edelta of edges which must have their visibility recomputed due to the change, so as to avoid a full recomputation for the entire grid (which is N(N-1)/2 or N^2 edges, depending on the implementation)?
    Update: if it is not possible to solve this in closed form, then maintaining a separate mapping from each cell to every cell pair whose line intersects said cell might also be an option. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts; by doing so, the above mapping can be reused for each sub-grid. This would come at a cost of increased computation relative to the number of subdivisions, and would also require a resumable ray-casting algorithm.
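
    As a sketch of the mapping idea from the update above (the names are assumptions, and cellsOnLine stands in for the same DDA walk used to build the look-up table): invert the precomputation so each cell maps to the edges whose sight line crosses it, and a dirty cell then yields exactly the subset Edelta to re-test. This is the memory-heavy trade-off the poster describes.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Illustrative sketch: map each grid cell to the visibility edges (cell
        // pairs) whose sight line crosses it, so that a change to one cell yields
        // the exact set of edges to recompute.
        final class VisibilityIndex {
            // Assumed to be the same DDA walk used to build the original table,
            // returning every cell index the segment between two cells crosses.
            interface LineWalker { List<Integer> cellsOnLine(int fromCell, int toCell); }

            private final Map<Integer, List<long[]>> edgesThroughCell = new HashMap<>();

            VisibilityIndex(int cellCount, LineWalker walker) {
                for (int a = 0; a < cellCount; a++) {
                    for (int b = a + 1; b < cellCount; b++) {
                        for (int crossed : walker.cellsOnLine(a, b)) {
                            edgesThroughCell
                                .computeIfAbsent(crossed, k -> new ArrayList<>())
                                .add(new long[] { a, b });
                        }
                    }
                }
            }

            // The subset Edelta: only these pairs can change when 'cell' changes.
            List<long[]> edgesToRecompute(int cell) {
                return edgesThroughCell.getOrDefault(cell, Collections.emptyList());
            }
        }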

    Read the article

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely to turn into a monster product. I'm building the system with node.js, and decided after I got a small base built that it'd be a great idea to start using some sort of automated test suite to test the application. I decided to use Jasmine, as it seems pretty solid and has a lot of features for stubbing, spying, and mocking methods and classes. The application has a lot of external data stores and API access (kestrel, MySQL, MongoDB, Facebook, and more). My issue is, I've got a good amount of code written that I want to start testing, as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database. I want to test the method that retrieves the data, and I'm not sure how to go about it. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So what I end up doing is stubbing the MySQL execution and returning a static dataset; I end up writing a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying a method is being called. I know this is kind of abstract and vague; the idea of testing seems very abstract too, though, so hopefully someone has some experience and can guide me in the right direction. Any advice or reading I can do is more than welcome. Thanks in advance.

    Read the article

  • cygwin GNU make .net program piping inconsistent behavior

    - by Codism
    This question may sound like a superuser question, but I feel there is something related to programming. Anyway, my computer recently had a fresh installation of Win 7 64 and Cygwin. Now I have observed a problem with pipe handling in a GNU makefile. The following is the Makefile I use to reproduce the issue:

        all:
            fsutil | cat
            my-dotnet-console.exe | cat

    The problem is: for the first command line, the piping works every time, but for the second command line, the piping barely works -- I get no result from the second command in most cases, regardless of the environment (cmd or bash) in which the makefile is invoked. However, if I copy and paste the second command line into cmd (or bash), the pipe works every time. The following is my .NET program:

        static void Main(string[] args)
        {
            Console.WriteLine(new string('a', 40));
            Console.Out.Flush();
        }

    The make version is 3.82.90, but the same problem was observed in a previous version (because of the Windows path handling problem in 3.82.9, I replaced make.exe with a previous version). I don't know the exact Cygwin version I have installed, but the current version on cygwin.com is 1.7.11-1. Currently, my workaround is to redirect the output to a temporary file, but it would be great if I could avoid the temporary file. Thanks

    Read the article

  • ODI 12c - Aggregating Data

    - by David Allan
    This posting will look at the aggregation component that was introduced in ODI 12c. For many ETL tool users this shouldn't be a big surprise; it's a little different than ODI 11g, but for good reason. You can use this component for composing data with relational-like operations such as sum, average and so forth. Also, Oracle SQL supports special functions called analytic SQL functions; in ODI 12c you can now use a specially configured aggregation component or the expression component for these. In database systems, an aggregate transformation is a transformation where the values of multiple rows are grouped together as input on certain criteria to form a single value of more significant meaning -- that's exactly the purpose of the aggregate component. In the image below you can see the aggregate component in action within a mapping; for how this and a few other examples are built, look at the ODI 12c Aggregation Viewlet here. The viewlet illustrates a simple aggregation being built, and then some Oracle analytic SQL such as AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO) built using both the aggregate component and the expression component. In 11g you used to just write the aggregate expression directly on the target. This made life easy in some cases, but it wasn't a very obvious gesture, and it had other drawbacks with the ordering of transformations (agg before join/lookup, after set, and so forth) and with supporting analytic SQL, for example -- there are a lot of postings from creative folks working around this in 11g, anything from customizing KMs to bypassing aggregation analysis in the ODI code generator. The aggregate component has a few interesting aspects:

        1. First and foremost, it defines the attributes projected from it -- ODI automatically performs the grouping; all you do is define the aggregation expressions for the aggregated columns. In 12c you can control this automatic grouping behavior so that you get the code you desire; for example, you can indicate that an attribute should not be included in the group by, which is what I did in the analytic SQL example using the aggregate component.
        2. The component has a few other properties of interest: a HAVING clause and a manual group by clause. The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. In 11g the filter was overloaded and used for both the having clause and the filter clause; this is no longer the case. If a filter is after an aggregate, it is after the aggregate (not sometimes after, sometimes having).
        3. The manual group by clause lets you use special database grouping grammar if you need to. For example, Oracle has a wealth of highly specialized grouping capabilities for data warehousing, such as the CUBE function. If you want to use specialized functions like that, you can manually define the code here. The example below shows the use of a manual group by, from an example in the Oracle Database data warehousing guide where the SUM aggregate function is used along with the CUBE function in the group by clause.

    The SQL I am trying to generate looks like the following, from the data warehousing guide:

        SELECT channel_desc, calendar_month_desc, countries.country_iso_code,
               TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
        FROM sales, customers, times, channels, countries
        WHERE sales.time_id = times.time_id
          AND sales.cust_id = customers.cust_id
          AND sales.channel_id = channels.channel_id
          AND customers.country_id = countries.country_id
          AND channels.channel_desc IN ('Direct Sales', 'Internet')
          AND times.calendar_month_desc IN ('2000-09', '2000-10')
          AND countries.country_iso_code IN ('GB', 'US')
        GROUP BY CUBE(channel_desc, calendar_month_desc, countries.country_iso_code);

    I can capture the source datastores, the filters and the joins using ODI's dataset (or as a traditional flow), which enables us to incrementally design the mapping and the aggregate component for the sum and group by as follows. In the mapping above you can see the joins and filters declared in ODI's dataset, allowing you to capture the relationships of the datastores required in an entity-relationship style, just like ODI 11g. The mix of ODI's declarative design and the common flow design provides a familiar design experience. The example below illustrates flow design (basic arbitrary ordering): a table load where only the employees who have the maximum commission are loaded into a target. The maximum commission is retrieved from the bonus datastore, and there is a lookup using employees as the driving table, with only those having maximum commission projected. Hopefully this has given you a taster for some of the new capabilities provided by the aggregate component in ODI 12c. In summary, the actions should be much more consistent in behavior and more easily discoverable for users, and the use of the components in a flow graph also supports arbitrary designs, with the tool (rather than the interface designer) taking care of the realization using ODI's knowledge modules. Interested to know if a deep dive into each component would be interesting for folks. Any thoughts?

    Read the article

  • Implementing an automatic navigation mesh generation for 2d top down map?

    - by J2V
    I am currently in the middle of implementing A* pathfinding for enemies. In order to implement the actual A* logic, I need a navigation mesh for my map. I am working on a 2D top-down RPG map. The world is static, so there is no requirement for dynamic runtime mesh generation. My world objects are pixel-based, not tile-based, and have associated data such as scale, rotation, origin, etc. I will obviously need some vertex data generated from my world objects; maybe generate polygons from color data? I could create a color map with objects for my whole map, but I have no idea how to begin creating nav mesh polygons. What would an actual navigation mesh generation pipeline look like with this kind of information available? Can anyone point me to some good resources? I have looked into some 3D nav mesh tools, but they seem overly complex for my situation and also get a lot of their required data from models. Thanks a lot in advance! I have been trying to get my head around it for some time now.

    Read the article

  • Unity: parallel vectors and cross product, how to compare vectors

    - by Heisenbug
    I read this post explaining a method to determine whether the angle between 2 given vectors, around the normal of the plane they describe, is clockwise or anticlockwise:

        public static AngleDir GetAngleDirection(Vector3 beginDir, Vector3 endDir, Vector3 upDir)
        {
            Vector3 cross = Vector3.Cross(beginDir, endDir);
            float dot = Vector3.Dot(cross, upDir);
            if (dot > 0.0f)
                return AngleDir.CLOCK;
            else if (dot < 0.0f)
                return AngleDir.ANTICLOCK;
            return AngleDir.PARALLEL;
        }

    After having used it a little, I think it's wrong. If I supply the same vector as both inputs (beginDir equal to endDir), the cross product is zero, but the dot product is a little bit more than zero. I think that to fix this I can simply check whether the cross product is zero, which means the 2 vectors are parallel, but my code doesn't work. I tried the following solution:

        Vector3 cross = Vector3.Cross(beginDir, endDir);
        if (cross == Vector3.zero)
            return AngleDir.PARALLEL;

    And it doesn't work, because the comparison between Vector3.zero and cross always reports them as different (even if cross is actually [0.0f, 0.0f, 0.0f]). I also tried this:

        Vector3 cross = Vector3.Cross(beginDir, endDir);
        if (cross.magnitude == 0.0f)
            return AngleDir.PARALLEL;

    It also fails, because the magnitude is slightly more than zero. So my question is: given 2 Vector3 in Unity, how do I compare them? I need an elegant equivalent of this:

        if (beginDir.x == endDir.x && beginDir.y == endDir.y && beginDir.z == endDir.z)
            return true;

    Read the article

  • cannot access ubuntu 12.04 SAMBA share from windows 7 using hostname

    - by user98398
    I've been trying for days to get this working, and everywhere I look online it seems no one has a definitive answer, so here is the rundown: I have an external drive attached to my Ubuntu 12.04 machine, "nicholas-desktop". I have the entire drive shared over the network via SAMBA. If I try to access the drive from Windows 7 by using "\\nicholas-desktop", it fails, saying it cannot locate "nicholas-desktop". However, if I use the current IP address assigned to my machine by my router's DHCP server, by typing "\\192.168.2.XXX", I have no problems accessing the share. If I try to ping my Ubuntu machine's hostname from Windows, it fails. The same happens if I try to ping my Windows machine, "nicholas-laptop", from my Ubuntu machine. Again, if I use either machine's assigned IP address, it works fine. Can someone please help me get this working? I don't want any workarounds like setting a static IP or a DHCP reservation; I want to be able to resolve hostnames from both sides. I have tried enabling SAMBA's WINS server so I could resolve the hostnames using NetBIOS, but that didn't work either; I may have made a mistake setting it up, though. Thanks for your time, NCB
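
    On the WINS attempt the poster mentions, a hedged sketch of the usual server-side settings (these are real smb.conf options, but whether they fit this network is an assumption; the Windows clients must also point their NIC's WINS entry at the Ubuntu machine's IP):

        # /etc/samba/smb.conf
        [global]
           netbios name = NICHOLAS-DESKTOP
           wins support = yes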

    Read the article

  • Is there a path from OpenSCAD to WebGL?

    - by Andrew Domaszek
    tl;dr version: does a software path exist to convert from OpenSCAD input to a render on a website via WebGL? What I'm trying to do is display OpenSCAD-created designs, uploaded via POST, on a LEMP-driven website; this is currently done by outputting a static PNG image. I would like to render one of OpenSCAD's export formats (wikibooks) and convert it for display in JavaScript/WebGL through some software path available via Linux batch processing. CPU and RAM are relatively unconstrained, but no GPU is available. I've looked into CopperLicht, but it requires CopperCube output, which would require Windows or Wine. FOSS is preferred but not required. Edit: I found that three.js has converters for fbx, dae, obj, and 3ds formats, which might be helpful as one of the last steps.

    Read the article

  • How to make IIS7 stop adding etag to response headers?

    - by user20028
    For performance reasons, I'm using Expires headers for static files (with long expiration periods, like 50 years or so). Now I'm trying to get rid of the ETag headers which are automatically added by IIS7. I've done some searching, but it seems harder than I thought (there doesn't seem to be a straightforward way). I found some workarounds, but they all use HttpModules (which I'm keeping as a last resort). I would strongly prefer not to have the ETag header added in the first place. Did anyone manage to do this? Thanks

    Read the article
