Search Results


  • Stunnel delaying boot

    - by Onitlikesonic
    My stunnel implementation works fine when the network is plugged in, but when no network is connected it takes an awfully long time to start, which delays the whole boot process. Some extra information: I'm using "delay=yes"; I'm using an FQDN (e.g. stunnel.mydomain.com) for the connections; I'm on Ubuntu, but this also happened with CentOS 5 previously. How can this be avoided, or a timeout specified? Edit: running strace as suggested by symcbean shows the following (including the last part, where it hangs):

        [...]
        --- SIGCHLD (Child exited) @ 0 (0) ---
        rt_sigreturn(0x11) = 0
        close(3) = 0
        wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 6039
        clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7ff9ce0c79d0) = 6046
        wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 6046
        --- SIGCHLD (Child exited) @ 0 (0) ---
        rt_sigreturn(0x11) = 6046
        write(1, "[Started: /etc/stunnel/stunnel.c"..., 37) = 37
        write(1, "stunnel.\n", 9) = 9
        exit_group(0) = ?
        [...]

    stunnel hangs on the wait4(-1, ... line, and when I plug in the network cable it continues, showing [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 6046.
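
    For reference, a minimal sketch of the per-service options involved, assuming a single client-mode tunnel (the service name, addresses and values are arbitrary examples, not a recommendation):

        ; /etc/stunnel/stunnel.conf
        [mytunnel]
        client = yes
        accept = 127.0.0.1:8080
        connect = stunnel.mydomain.com:443
        delay = yes            ; defer the DNS lookup for "connect" until a client connects
        TIMEOUTconnect = 5     ; give up on a remote connect() after 5 seconds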

  • Efficiently checking input and firing events

    - by Jim
    I'm writing an InputHandler class in XNA, and there are several different keys considered valid input (all of type Microsoft.Xna.Framework.Input.Keys). For each key, I have three events:

        internal event InputEvent XYZPressed;
        internal event InputEvent XYZHeld;
        internal event InputEvent XYZReleased;

    where XYZ is the name of the Keys object representing that key. To fire these events, I have the following for each key:

        if (Keyboard.GetState().IsKeyDown(XYZ))
        {
            if (PreviousKeyState.IsKeyDown(XYZ))
            {
                if (XYZHeld != null) XYZHeld();
            }
            else
            {
                if (XYZPressed != null) XYZPressed();
            }
        }
        else if (PreviousKeyState.IsKeyDown(XYZ))
        {
            if (XYZReleased != null) XYZReleased();
        }

    However, this is a lot of repeated code (the above needs to be repeated for each input key). Aside from being a hassle to write, if any keys are added to or removed from the set (if functionality is added or removed), a new section needs to be added (or an existing one removed). Is there a cleaner way to do this? Perhaps something along the lines of:

        foreach key
            check which state it's in
            fire this key's event for that state

    where the code does the foreach (automatically checking exactly those keys that "exist") rather than the coder?
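
    A hedged sketch of the table-driven approach described at the end of the question, assuming the InputEvent delegate takes no arguments (names like KeyEvents and Track are made up for illustration):

        public delegate void InputEvent();

        // Holds the three events for one key, replacing the per-key XYZPressed/Held/Released fields.
        public class KeyEvents
        {
            public event InputEvent Pressed;
            public event InputEvent Held;
            public event InputEvent Released;

            internal void FirePressed()  { if (Pressed  != null) Pressed();  }
            internal void FireHeld()     { if (Held     != null) Held();     }
            internal void FireReleased() { if (Released != null) Released(); }
        }

        public class InputHandler
        {
            private readonly Dictionary<Keys, KeyEvents> trackedKeys = new Dictionary<Keys, KeyEvents>();
            private KeyboardState previousKeyState;

            // Register a key once; callers subscribe to the returned KeyEvents.
            public KeyEvents Track(Keys key)
            {
                KeyEvents events;
                if (!trackedKeys.TryGetValue(key, out events))
                {
                    events = new KeyEvents();
                    trackedKeys[key] = events;
                }
                return events;
            }

            // Call once per frame; loops over exactly the keys that were registered.
            public void Update()
            {
                KeyboardState current = Keyboard.GetState();
                foreach (KeyValuePair<Keys, KeyEvents> entry in trackedKeys)
                {
                    bool downNow    = current.IsKeyDown(entry.Key);
                    bool downBefore = previousKeyState.IsKeyDown(entry.Key);

                    if (downNow && downBefore) entry.Value.FireHeld();
                    else if (downNow)          entry.Value.FirePressed();
                    else if (downBefore)       entry.Value.FireReleased();
                }
                previousKeyState = current;
            }
        }

    This assumes using directives for System.Collections.Generic and Microsoft.Xna.Framework.Input; adding a key is then one Track(Keys.Space) call instead of another copy of the if/else block.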

  • How to prevent recursive windows when connecting to vncserver on localhost

    - by blog.adaptivesoftware.biz
    I have a VNC server (vino) configured on my Ubuntu 8.10 box. I would like to connect to this server from a VNC client running on the same machine (the reason for doing this strange thing is mentioned below). Understandably, when I connect to a VNC server on the same box, my VNC client shows recursive windows. Is there a way I can connect to the VNC server on the same machine and not have the recursive-windows problem? Perhaps if I could start the VNC server on one display and the client on another display, would it work? How can I do something like this? Note, the reason for running the VNC client and server on the same machine: when I start our Java Swing unit test suite, a bunch of Swing UIs are created and destroyed as the tests run. These windows fly into the foreground, making it impossible to work while the test suite is running. I am hoping to start the test suite within a VNC session so that I can continue working while the tests run.
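
    A hedged sketch of the two-display idea, assuming a standalone vncserver (e.g. the vnc4server or tightvncserver package) alongside vino; the display number, geometry and launcher script are arbitrary examples:

        # start a VNC server on its own X display (:1), separate from the desktop on :0
        vncserver :1 -geometry 1280x1024

        # launch the Swing test suite against that display so its windows open there
        DISPLAY=:1 ./run-swing-tests.sh    # hypothetical launcher for the suite

        # watch (or ignore) the tests from the desktop; no recursion, since :1 is not the desktop
        vncviewer localhost:1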

  • Enabling Surround sound on a Realtek ALC892 via SPDIF, on Windows 7

    - by Alex
    I have a problem with my ALC892 on an ASRock mainboard (ASRock 890FX Deluxe4). In general I get only stereo sound when I use the SPDIF connection. My amp shows that it is receiving surround sound only when I use the Test feature of Windows 7. This test feature lets you check which formats the audio chip supports, and the tests render both Dolby Digital and DTS correctly. (You can find this test under Sounds, Playback Devices; select Digital Audio, then "Properties".) I am using Windows 7 x64 with the latest drivers from the official Realtek website. I also tested other driver versions, both from the Realtek website and from the ASRock one, but had no luck. Thanks for the help. Some specs: CPU: AMD Phenom II X4 965; MOBO: ASRock 890FX Deluxe4 (with onboard Realtek ALC892); audio amp: Onkyo R-380 (works fine with other sources like the PS3 and Xbox 360).

  • Sharing an Apache configuration between testing vs. production

    - by Kevin Reid
    I have a personal web site with a slightly nontrivial Apache configuration. I test changes on my personal machine before uploading them to the server. The path to the files on disk and the root URL of the site are of course different between the test and production environments, and they occur in many places in the configuration (especially <Directory> blocks for special locations which have scripts, or no directory listing, or ...). What is the best way to share the common elements of the configuration, to make sure that my production environment matches my test environment as closely as possible? What I've thought of is to use SetEnv to store the paths for the current machine in environment variables, then Include a common configuration file with ${} everywhere there's something machine-specific. Any hazards of this method?
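
    One caveat worth noting: SetEnv only affects the environment handed to requests (CGI, SSI and the like), not config-file parsing. A hedged sketch of the same idea using Define instead, assuming Apache 2.4 (the file layout and paths are invented for illustration):

        # machine-specific file, loaded first on each machine:
        Define SITE_ROOT /home/kevin/www/mysite
        Define SITE_URL  /mysite

        # shared file, Included on both machines:
        <Directory "${SITE_ROOT}/scripts">
            Options +ExecCGI
        </Directory>
        Alias "${SITE_URL}/static" "${SITE_ROOT}/static"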

  • droid cam makefile understanding and error

    - by nerorevenge
    I tried installing droidcam on my Fedora 19 (64-bit) machine. A link to the droidcam application is here. Whenever I try to install it, the following Makefile is invoked:

        obj-m := v4l2loopback-dc.o

        all:
                make -C /lib/modules/`uname -r`/build M=`pwd`

        test:
                gcc test.c -o test

        clean:
                make -C /lib/modules/`uname -r`/build M=`pwd` clean

        insmod:
                sudo insmod v4l2loopback-dc.ko width=320 height=240

        rmmod:
                sudo rmmod v4l2loopback-dc.ko

    and here is the error:

        -- INSTALL: Webcam parameters: '320' and '240'
        -- INSTALL: Building v4l2loopback-dc.ko
        make -C /lib/modules/`uname -r`/build M=`pwd`
        make: *** /lib/modules/3.9.5-301.fc19.x86_64/build: No such file or directory.  Stop.
        make: *** [all] Error 2
        -- INSTALL: v4l2loopback-dc.ko not built.. Failure

    build happens to be a symbolic link. I was wondering what exactly the Makefile is trying to do, and why it is failing?
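
    For context, the make -C /lib/modules/`uname -r`/build M=`pwd` line hands the module source to the kernel's own build system (kbuild), and build is a symlink into the headers of the running kernel; the error means that symlink has nothing to point at. A hedged sketch of the usual check on Fedora, assuming the headers are simply not installed:

        # install the headers matching the running kernel
        sudo yum install kernel-devel-$(uname -r)

        # the symlink should now resolve instead of dangling
        ls -l /lib/modules/$(uname -r)/build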

  • How to deal with static utility classes when designing for testability

    - by Benedikt
    We are trying to design our system to be testable, and most parts are developed using TDD. Currently we are trying to solve the following problem: in various places it is necessary for us to use static helper methods like ImageIO and URLEncoder (both standard Java API) and various other libraries that consist mostly of static methods (like the Apache Commons libraries). But it is extremely difficult to test the methods that use such static helper classes. I have several ideas for solving this problem:

        1. Use a mock framework that can mock static classes (like PowerMock). This may be the simplest solution but somehow feels like giving up.
        2. Create instantiable wrapper classes around all those static utilities so they can be injected into the classes that use them. This sounds like a relatively clean solution, but I fear we'll end up creating an awful lot of those wrapper classes.
        3. Extract every call to these static helper classes into a function that can be overridden, and test a subclass of the class I actually want to test.

    But I keep thinking that this just has to be a problem that many people face when doing TDD, so there must already be solutions for it. What is the best strategy to keep classes that use these static helpers testable?
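
    A hedged sketch of the wrapper idea from option 2, using URLEncoder as the example (the interface and class names are made up):

        // Instance-level seam over the static URLEncoder API.
        public interface UrlCodec {
            String encode(String value);
        }

        public class JdkUrlCodec implements UrlCodec {
            @Override
            public String encode(String value) {
                try {
                    return java.net.URLEncoder.encode(value, "UTF-8");
                } catch (java.io.UnsupportedEncodingException e) {
                    throw new IllegalStateException("UTF-8 is always supported", e);
                }
            }
        }

        // The class under test takes the wrapper, so a test can inject a stub.
        public class LinkBuilder {
            private final UrlCodec codec;

            public LinkBuilder(UrlCodec codec) {
                this.codec = codec;
            }

            public String build(String base, String query) {
                return base + "?q=" + codec.encode(query);
            }
        }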

  • Why is TDD not working here?

    - by TobiMcNamobi
    I want to write a class A that has a method calculate(<params>). That method should calculate a value using database data. So I wrote a class Test_A for unit testing (TDD). The database access is done via another class, which I have mocked with a class we'll call Accessor_Mockup. Now, the TDD cycle requires me to add a test that fails and then make the simplest changes to A that make the test pass. So I add data to Accessor_Mockup and call A.calculate with appropriate parameters. But why should A use the accessor class at all? It would be simpler (!) if the class just "knew" the values it could retrieve from the database. For every test I write, I could introduce such a new value (or an if-branch or whatever). But wait... TDD is more than that. There is the refactoring part. But that sounds to me like: "OK, I could do all this with a big if-elseif construct. I could refactor it using a new class... but instead I'll make use of the DB accessor and do this in a totally different way. The code will not necessarily look better afterwards, but I know I WANT to use the database."

  • Programming logic to group a users activities like Facebook

    - by Chris Dowdeswell
    So I am trying to develop an activity feed for my site. Basically, if I UNION a bunch of activities into a feed, I end up with something like the following:

        Chris is now friends with Mark
        Chris is now friends with Dave

    What I want, though, is a neater way of grouping these similar posts so the feed doesn't give information overload, e.g.:

        Chris is now friends with Mark, Dave and 4 others

    Any ideas on how I can approach this logically? I am using Classic ASP on SQL Server. Here is the UNION statement I have so far (see the grouping sketch after it):

        SELECT U.UserID As UserID, L.UN As UN, Left(U.UID,13) As ProfilePic,
               U.Fname + ' ' + U.Sname As FullName,
               'said ' + WP.Post AS Activity, WP.Ctime
        FROM Users AS U
        LEFT JOIN Logins L ON L.userID = U.UserID
        LEFT OUTER JOIN WallPosts AS WP ON WP.userID = U.userID
        WHERE WP.Ctime IS NOT NULL
        UNION
        SELECT U.UserID As UserID, L.UN As UN, Left(U.UID,13) As ProfilePic,
               U.Fname + ' ' + U.Sname As FullName,
               'commented ' + C.Comment AS Activity, C.Ctime
        FROM Users AS U
        LEFT JOIN Logins L ON L.userID = U.UserID
        LEFT OUTER JOIN Comments AS C ON C.UserID = U.userID
        WHERE C.Ctime IS NOT NULL
        UNION
        SELECT U.UserID As UserID, L.UN As UN, Left(U.UID,13) As ProfilePic,
               U.Fname + ' ' + U.Sname As FullName,
               'connected with <a href="/profile.asp?un=' +
                   (SELECT Logins.un FROM Logins WHERE Logins.userID = Cn.ToUserID) + '">' +
                   (SELECT Users.Fname + ' ' + Users.Sname FROM Users WHERE userID = Cn.ToUserID) +
                   '</a>' AS Activity,
               Cn.Ctime
        FROM Users AS U
        LEFT JOIN Logins L ON L.userID = U.UserID
        LEFT OUTER JOIN Connections AS Cn ON Cn.UserID = U.userID
        WHERE CN.Ctime IS NOT NULL
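
    One hedged way to sketch the grouping step is to aggregate same-day connections per user, so the rendering layer can emit the "and N others" suffix (column names follow the question; the per-day bucketing and SQL Server 2008's date type are assumptions):

        SELECT U.UserID,
               CONVERT(date, Cn.Ctime) AS ActivityDay,
               COUNT(*) AS FriendCount,
               MIN(Cn.Ctime) AS FirstTime
        FROM Connections AS Cn
        JOIN Users AS U ON U.UserID = Cn.UserID
        WHERE Cn.Ctime IS NOT NULL
        GROUP BY U.UserID, CONVERT(date, Cn.Ctime)

    The ASP layer would then show the first two or three friend names for the group and append "and (FriendCount - 3) others" when the count exceeds the cutoff.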

  • Ask the Readers: How Fast is Your Internet Connection?

    - by Mysticgeek
    The federal government recently announced a broadband initiative that calls for 260 million homes to have 100Mbps Internet connections by the year 2020. This got us wondering: how fast is your current Internet connection? (Photo by roland.) When it comes to the speed of our Internet connection, we all want the maximum possible. The FCC recently announced their National Broadband Plan, an initiative to improve the Internet infrastructure in the United States and provide higher speeds to everyone. You've also undoubtedly heard the news about Google getting into the mix, with their program to bring ultra high-speed fiber broadband to 50,000 users in select cities. While we wait for those programs to come to fruition, we thought it would be cool to check out what kinds of speeds you're getting now. Test your Internet connection speed: there are several sites out there you can use to test your Internet speed, but probably the best is Speedtest.net. It's easy to use, and it allows you to test download and upload speeds to and from various locations in the US and throughout the world. If you already know the speeds you're getting, leave a comment and let us know. If you use Speedtest.net, just keep in mind that our comment system won't allow you to copy their result links, but you can simply tell us what you get in the results. We're especially interested in the results of those of you who have Verizon FIOS or Comcast's "Ultra" service. Leave a comment and join in the discussion!

  • Can't login to just installed view administrator

    - by matarvai81
    Hi, we are starting to test View for our purposes. I have created a new test environment: a new vdidemo Active Directory, a new Virtual Center, etc. I just installed the View Connection Server component on a new server and am trying to do the initial configuration, but when I try to log in I get the following error: "Error accessing the View Administrator. Contact the system administrator". The log file shows the following error:

        08:14:08,925 INFO LoginBean User administrator has failed to authenticate to View Administrator

    What is causing this problem? How can I log in and start to test VDI?

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website, and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company, a2hosting, has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely, or through Netbeans' remote editing features. My question is how I can use git to separate my site into three areas: live, staging and dev. Here's my initial thought:

        public_html  : live site and git repo
        testing      : a mirror of the site used for visual tests (full git repo)
        dev/ticket#  : git branches of public_html used for features and bug fixes (full git repo)

    Version control with git. Initial setup:

        cd public_html
        git init
        git add *
        git commit -m 'initial commit of the site'
        cd ..
        git clone public_html testing
        mkdir dev

    Development:

        cd dev
        git clone ../testing ticket#

    All work is done in ./dev/ticket#; visit www.domain.com/dev/ticket# to visually test, and make granular commits as necessary until the dev work is done. Then:

        git push origin master:ticket#

    If the above fails, merge the latest testing state into the current dev work (git merge origin/master), then try the push again. Finally, mark ticket# as ready for integration.

    Integration and deployment process:

        cd ../../testing
        git merge ticket# -m "integration test for ticket#" --no-ff    # check for conflicts

    Run the hudson tests and visit www.domain.com/testing for a visual test. If all tests pass: if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin, otherwise just git push origin; then:

        cd ../public_html
        git checkout -f    # live site should now have the latest dev from ticket#

    If the tests fail instead, revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# that testing failed, with the failure details.

    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.

    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.

    My questions: Does this workflow make sense? If not, any recommendations? Is my approach to reverting correct, or is there a better way to say 'revert to before commit x'?
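
    On the last question, a hedged note: git checkout master~1 moves to a detached HEAD rather than undoing anything, so "revert to before commit x" is usually expressed differently. A sketch of one common alternative, assuming the integration merge is the commit being undone:

        # create a new commit that undoes a merge commit, keeping history intact
        git revert -m 1 <merge-commit-sha>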

  • Problem with running php script using mysql on tomcat

    - by Peter
    I am using Tomcat 6 with JavaBridge. I have stored my PHP script at the following location:

        C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\project\test.php

    In test.php I am using curl and mysql. The php.ini in JavaBridge is stored at:

        C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\WEB-INF\cgi\php.ini

    and its contents are:

        extension_dir="C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\WEB-INF\cgi\x86-windows\ext"
        include_path="C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\WEB-INF\pear;."

    There is also a config file called mysql.ini, whose contents are:

        extension = php_mysql.dll

    I had also installed WAMP earlier, so I copied all the DLLs from C:\wamp\bin\php\php5.3.0\ext to C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\WEB-INF\cgi\x86-windows\ext. When I start Tomcat and run my script, I get the following error:

        Fatal error: Call to undefined function mysqli_connect() in C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\project\test.php on line 534

    Please help.
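
    One hedged observation with an example: the failing call is mysqli_connect(), but the only extension loaded is php_mysql.dll, which provides the older mysql_* functions. The mysqli API lives in a separate DLL, so the config would also need something like the following line, assuming php_mysqli.dll is present in the extension directory:

        extension = php_mysqli.dll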

  • How to detect hard disk failure?

    - by Devator
    So, one of my servers has a hard disk failure. It's running software RAID, the system locked up, and according to /proc/mdstat (and /var/log/messages) the disk is really down:

        Personalities : [raid1]
        md2 : active raid1 sdb2[1]
              104320 blocks [2/1] [_U]
        md5 : active raid1 sdb5[1]
              2104448 blocks [2/1] [_U]
        md6 : active raid1 sdb6[1]
              830134656 blocks [2/1] [_U]
        md1 : active raid1 sdb1[1]
              143363968 blocks [2/1] [_U]

    and:

        Nov 5 22:04:37 m38501 smartd[4467]: Device: /dev/sda, not capable of SMART self-check

    However, when I run smartctl -H /dev/sda, it passes the test. It also passes the test with smartctl --test=short /dev/sda. So, is smartctl a broken testing tool, or am I doing something completely off?
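
    For reference, smartctl -H only prints the drive's own overall health verdict, which can read PASSED even on a failing disk. A hedged sketch of fuller checks (both are standard smartctl flags):

        # full attribute dump; watch reallocated, pending and uncorrectable sector counts
        smartctl -a /dev/sda

        # show the log of completed self-tests, including the one queued with --test=short
        smartctl -l selftest /dev/sda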

  • What is the kd tree intersection logic?

    - by bobobobo
    I'm trying to figure out how to implement a kd-tree, following page 322 of "Real-Time Collision Detection" by Ericson. The text is included below in case Google Books preview doesn't let you see it when you click the link. Relevant section:

        "The basic idea behind intersecting a ray or directed line segment with a k-d tree is straightforward. The line is intersected against the node's splitting plane, and the t value of intersection is computed. If t is within the interval of the line, 0 <= t <= tmax, the line straddles the plane and both children of the tree are recursively descended. If not, only the side containing the segment origin is recursively visited."

    So here's what I have (open the images in a new tab if you can't see the lettering): a picture of the scene, and the logical tree. Here the orange ray is going through the 3D scene, and the x's represent intersections with a plane. From the LEFT, the ray hits:

        1. the front face of the scene's enclosing cube,
        2. the (1) splitting plane,
        3. the (2.2) splitting plane,
        4. the right side of the scene's enclosing cube.

    But here's what would happen, naively following Ericson's basic description above: test against splitting plane (1); the ray hits it, so both children of splitting plane (1) are included in the next test. Test against splitting plane (2.1); the ray actually hits that plane (way off to the right), so both of its children are included in the next level of tests. (This is counter-intuitive: shouldn't only the bottom node be included in subsequent tests?) Can someone describe what happens when the orange ray goes through the scene correctly?
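
    A hedged sketch of the traversal the book's text is summarizing, in C++ (the node layout and names are invented, and leaf handling is elided). The key detail is that the segment is re-based at each crossing, so "the side containing the segment origin" and the interval 0 <= t <= tmax always refer to the current sub-segment, not the original ray:

        struct KdNode {
            bool isLeaf;
            int axis;          // 0, 1 or 2: which coordinate the plane splits on
            float split;       // plane position along that axis
            KdNode* child[2];  // child[0] below the plane, child[1] above
        };

        // Visit all nodes overlapped by the segment S(t) = a + t*d, 0 <= t <= tmax.
        void Visit(const KdNode* node, const float a[3], const float d[3], float tmax)
        {
            if (node == nullptr) return;
            if (node->isLeaf) { /* test this leaf's primitives against the segment */ return; }

            int first = a[node->axis] > node->split;  // child containing the segment origin

            if (d[node->axis] == 0.0f) {
                // Segment parallel to the splitting plane: it stays on one side.
                Visit(node->child[first], a, d, tmax);
            } else {
                float t = (node->split - a[node->axis]) / d[node->axis];
                if (0.0f <= t && t <= tmax) {
                    // Segment straddles the plane: near side first, then the far
                    // side with the segment re-based at the crossing point.
                    float b[3] = { a[0] + t * d[0], a[1] + t * d[1], a[2] + t * d[2] };
                    Visit(node->child[first], a, d, t);
                    Visit(node->child[first ^ 1], b, d, tmax - t);
                } else {
                    // Crossing falls outside [0, tmax]: only the origin's side overlaps.
                    Visit(node->child[first], a, d, tmax);
                }
            }
        }

    Under this reading, hitting plane (2.1) "way off to the right" is exactly the else branch: by the time (2.1) is tested, the segment has already been clipped at plane (1), so the crossing falls outside the remaining [0, tmax] and only the child containing the sub-segment is descended.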

  • Converting to and from local and world 3D coordinate spaces?

    - by James Bedford
    Hey guys, I've been following a guide I found here (http://knol.google.com/k/matrices-for-3d-applications-view-transformation) on constructing a matrix that will let me convert 3D coordinates to an object's local coordinate space, and back again. I've tried to implement these two matrices using my object's look, side, up and location vectors, and it seems to be working for the first three coordinates. I'm a little confused as to what I should expect for the w coordinate. Here are a couple of examples from the printouts I've made of the matrices that are constructed. I'm passing a test vector of [9, 8, 14, 1] each time to see if I can convert both ways.

    Basic example:

        localize matrix:
             0.000000  -0.000000   1.000000   0.000000
             0.000000   1.000000   0.000000   0.000000
             1.000000   0.000000   0.000000   0.000000
             5.237297 -45.530716  11.021271   1.000000

        globalize matrix:
             0.000000   0.000000   1.000000   0.000000
            -0.000000   1.000000   0.000000   0.000000
             1.000000   0.000000   0.000000   0.000000
           -11.021271 -45.530716  -5.237297   1.000000

        test:      Vector4f(9.000000, 8.000000, 14.000000, 1.000000)
        localTest: Vector4f(14.000000, 8.000000, 9.000000, -161.812256)
        worldTest: Vector4f(9.000000, 8.000000, 14.000000, -727.491455)

    More complicated example:

        localize matrix:
             0.052504  -0.000689  -0.998258   0.000000
             0.052431   0.998260   0.002068   0.000000
             0.997241  -0.052486   0.052486   0.000000
            58.806095   2.979346 -39.396252   1.000000

        globalize matrix:
             0.052504   0.052431   0.997241   0.000000
            -0.000689   0.998260  -0.052486   0.000000
            -0.998258   0.002068   0.052486   0.000000
           -42.413120   5.975957 -56.419727   1.000000

        test:      Vector4f(9.000000, 8.000000, 14.000000, 1.000000)
        localTest: Vector4f(-13.508600, 8.486917, 9.290090, 2.542114)
        worldTest: Vector4f(9.000190, 7.993863, 13.990230, 102.057129)

    As you can see in the more complicated example, the coordinates lose some precision after converting both ways, but this isn't a problem. I'm just wondering how I should deal with the last (w) coordinate. Should I just set it to 1 after performing the matrix multiplication, or does it look like I've done something wrong? Thanks in advance for your help!
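
    For what it's worth, a hedged observation based on the numbers above: the translation sits in the fourth row of these matrices, which is the row-vector convention, so they should be applied as v' = v * M. In that convention the fourth column is (0, 0, 0, 1), and a point with w = 1 stays at w = 1. The stray w values look like what you get from the column-vector product M * v instead; for the first localize matrix that gives

        w' = 5.237297*9 + (-45.530716)*8 + 11.021271*14 + 1*1 = -161.8122...

    which matches the localTest output (and note the translation never reaches x, y, z, since the fourth column of the first three rows is zero). So the likely fix is the multiplication order (or transposing the matrices), rather than resetting w to 1 afterwards.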

  • Does TDD really work for complex projects?

    - by Amir Rezaei
    I'm asking this question regarding problems I have experienced during TDD projects. I have noticed the following challenges when creating unit tests:

        Generating and maintaining mock data: It's hard and unrealistic to maintain large sets of mock data, and it gets even harder when the database structure undergoes changes.

        Testing the GUI: Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce a GUI scenario.

        Testing the business logic: I have found that TDD works well if you limit it to simple business logic. Complex business logic, however, is hard to test, since the number of combinations of tests (the test space) is very large.

        Contradictions in requirements: In reality it's hard to capture all requirements during analysis and design. Many times requirements lead to contradictions because the project is complex, and the contradiction is found late, during the implementation phase. TDD requires that requirements are 100% correct. In such cases one could expect that conflicting requirements would be caught while creating the tests, but the problem is that this isn't the case in complex scenarios.

    I have read this question: Why does TDD work? Does TDD really work for complex enterprise projects, or is it practically limited by project type?

  • GnoMenu integration in docky.

    - by Lyon Apostol
    Posting the error. Running docky --debug gives:

        < ...some other stuff... >
        [Info 10:27:52.535] [Helper] Starting GnoMenu.py
        [Info 10:27:52.560] [HelperService] Helper added: /usr/share/dockmanager/scripts/GnoMenu.py
        [Info 10:27:52.757] [Helper] GnoMenu.py :: No module named dockmanager.dockmanager
        [Info 10:27:52.758] [Helper] GnoMenu.py :: gconf backend
        [Info 10:27:52.758] [Helper] GnoMenu.py :: 239
        [Error 10:27:52.759] [Helper] GnoMenu.py :: Traceback (most recent call last):
        [Error 10:27:52.759] [Helper] GnoMenu.py ::   File "/usr/share/dockmanager/scripts/GnoMenu.py", line 120, in <module>
        [Error 10:27:52.759] [Helper] GnoMenu.py ::     class DockyGnoMenuItem(DockManagerItem):
        [Error 10:27:52.759] [Helper] GnoMenu.py :: NameError: name 'DockManagerItem' is not defined
        [Info 10:27:52.874] [Helper] GnoMenu.py has exited (Code 1).

    After installing dockmanager, this is what happens instead:

        [Info 11:34:36.166] [Helper] GnoMenu.py :: gconf backend
        [Info 11:34:36.166] [Helper] GnoMenu.py :: 136
        [Info 11:34:36.166] [Helper] GnoMenu.py :: DockManagerSink(): System.NotSupportedException: Cannot send null variant
        [Info 11:34:36.210] [Helper] GnoMenu.py has exited (Code 0).

    I tried to click the icon in the dock:

        gconf backend
        380
        DockManagerSink(): System.NotSupportedException: Cannot send null variant
        gconf backend
        380
        DockManagerSink(): System.NotSupportedException: Cannot send null variant

    How can I fix this? Thanks for your prompt answer!

  • Can unit tests verify software requirements?

    - by Peter Smith
    I have often heard that unit tests help programmers build confidence in their software. But are they enough for verifying that software requirements are met? I am losing confidence that software is working just because its unit tests pass. We have experienced some failures in production deployment due to an untested/unverified execution path. These failures are sometimes quite large, impact business operations, and often require an immediate fix. The failure is very rarely traced back to a failing unit test. We have large unit test bodies with reasonable line coverage, but almost all of these focus on individual classes and not on their interactions. Manual testing seems to be ineffective, because the software being worked on is typically large, with many execution paths and many integration points with other software. It is very painful to manually test all of the functionality, and it never seems to flush out all the bugs. Are we doing unit testing wrong when it seems we are still failing to verify the software correctly before deployment? Or do most shops have another layer of automated testing in addition to unit tests?

  • Designing javascript chart library

    - by coolscitist
    I started coding a chart library on top of d3js: My chart library. I read "Javascript API reusability" and "Towards reusable charts". However, I am NOT really following their suggestions, because I am not really convinced by them. This is how my library can be used to create a bubble chart:

        var chart = new XYBubbleChart();
        chart.data = [{"xValue":200,"yValue":300},{"xValue":400,"yValue":200},{"xValue":100,"yValue":310}]; // set data
        chart.dataKey.x = "xValue";
        chart.dataKey.y = "yValue";
        chart.elementId = "#chart";
        chart.createChart();

    Here are my questions:

        1. It does not use chaining. Is that a big issue?
        2. Every property and function is exposed publicly (example: width and height are exposed in Chart.js). OOP is all about abstraction and hiding, but I don't really see the point right now. I think exposing everything gives the flexibility to change properties and functionality inside subclasses and objects without writing a lot of code. What could be the pitfalls of such exposure?
        3. I have implemented functions like zooming and "showing info boxes when a data point is clicked" as "abilities" (example: XYZoomingAbility.js). Basically, such "abilities" accept a "chart" object and play around with its public variables to add functionality. This allows me to add an ability by writing:

            activateZoomAbility(chartObject);

        My goal is to separate "visualization" from "interactivity": I want "interactivity", like zooming, to be plugged into the chart rather than built into it. Like, I don't want my bubble chart to know anything about "zooming"; however, I do want a zoomable bubble chart. What is the best way to do this?
        4. How to test, and what to test? I have written mixed tests: jasmine and actual html files, so that I can test manually in the browser.

  • Running CRON job on Ubuntu server for SugarCRM

    - by Logik
    I am pretty inexperienced with Linux, so be descriptive in your answer. My environment: a local Linux server (Ubuntu 12.04) hosting SugarCRM 6.5.2. There is an area in SugarCRM called the scheduler, where I can configure some predefined jobs; in my case I am trying to run email reminders (every min/hour/day/month). For this scheduler to be effective, I read somewhere that I need to set up a CRON job. So I did some research and finally put the following line in the crontab for the root user, as per the instructions given in SugarCRM:

        * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1

    Well, I am creating contracts in my SugarCRM (AOS module), and I want email reminders for these contracts to be sent to the person concerned. Now, my SugarCRM email is configured correctly and I can send test emails using it. But CRON + scheduler gives no result; I can't receive any emails. Then I tried to read /var/log/syslog, and it shows an entry for the following line each minute:

        Oct 27 15:03:01 unicomm CRON[28182]: (root) CMD (cd /var/www/crm; php -f cron.php > /dev/null 2>&1)

    I've a few questions: What does the CRON job line I've added to the crontab mean? "cd /var/www/crm; php -f cron.php > /dev/null 2>&1" is not making any sense to me. How am I supposed to get this thing to work? I've searched a lot (including the SugarCRM forum), but no luck.
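
    For reference, a hedged reading of that crontab line, piece by piece:

        * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1

        # the five fields are: minute, hour, day of month, month, day of week;
        # five asterisks mean "every minute of every day"
        # cd /var/www/crm      change into the SugarCRM install directory
        # php -f cron.php      run SugarCRM's scheduler entry point
        # > /dev/null 2>&1     discard standard output and standard error

    One side effect worth knowing while debugging: because output is thrown away, any errors cron.php prints are lost; temporarily redirecting to a file instead (e.g. >> /tmp/sugarcron.log 2>&1) is a common way to see what the job is actually doing.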

  • Bridging the gap between developers and testers with VS 2010

    - by Etienne Tremblay
    Hey everyone, I know it's been an eternity since I blogged, but I have so much to do that I unfortunately need to prioritize. Vincent Grondin and I did a 7-hour presentation on the new developer and tester tools available in the VS 2010 suite. It was a blast. We did it in front of an audience (around 120) and it was taped. We did it as a play, and really didn't look at the crowd at all; we were training each other on the technology. It is now available for anyone who would like to watch it, at this location: http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx

    What we covered in the full-day event:

        Migration to TFS 2010 (10h00)
        1. Migration of VSS to TFS (20 min.)
        2. Automating the build (something you can't do with VSS) (20 min.)
        3. User story (real application context for this presentation) (20 min.)

        10h00 Pause

        Manual tests by dev (11h30)
        4. Adding a tester to the team (intro to MTM) (20 min.)
        5. Defining tests (what is a white bug) (20 min.)
        6. Fixing the bug, showing IntelliTrace, and playing back the test (20 min.)

        12h15 Lunch

        Manual testing for maintenance (13h30)
        7. Implement a new feature (web service), identify a bug with MTM, branch for a production fix, and also add a new build script (20 min.)
        8. Fix the bug in the production branch, play back the tests, and merge the change into the main branch (20 min.)

        Manual testing with the lab manager (14h30)
        9. Intro to Lab Manager and environments (20 min.)
        10. Change the build script to deploy to the lab, and test with the web service in the lab environment (20 min.)

        15h15 Pause

        Automated UI tests with Coded UI (15h30)
        11. Reducing the effort of testing the UI (20 min.)
        12. Repeating tests to make sure the application is working properly (20 min.)
        13. Automating Coded UI with the lab environment (20 min.)

        16h30 Conclusions

    As you can see, lots of stuff! Enjoy the show and let us know how you like it. Cheers, ET

    Technorati Tags: VS 2010, Testing Tools, ALM, Training

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in ASP.NET James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chock full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, whether it was different from SQL Server, whether it was the same as EF proper, and whether there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)

    Consider the example of a hockey league. You have several different entities in the league, including games, teams that play the games, and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen:

        1. The team, its games and its mascot are deleted, and
        2. The players for that team remain in the league (and therefore the database), but they should no longer be assigned to a team.

    So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project.

    The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach

    First up, let's take a look at the DB-first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:

        Players.Team_Id (NULL) -> Teams.Id
        Mascots.Id (NOT NULL) -> Teams.Id (ON DELETE CASCADE)
        Games.HomeTeam_Id (NOT NULL) -> Teams.Id
        Games.AwayTeam_Id (NOT NULL) -> Teams.Id

    Note that by specifying ON DELETE CASCADE for the Mascots -> Teams foreign key, the database will automatically delete the team's mascot when the team is deleted. While we want the same behaviour for the Games -> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying ON DELETE CASCADE on these foreign keys would cause a circular reference error:

        The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE. – MSDN

    When we create an entity data model from the above database, we get the corresponding four-entity model. In order to get the Games to be deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at cascade delete in EF Database First, take a look at this blog post by Alex James.

    Building The Same Sample with EF Code First

    Next, we're going to build up the model with the Code First approach. EF Code First is defined on the ADO.NET team blog as such:

        Code First allows you to define your model using C# or VB.Net classes; optionally, additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database.

    Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN:

        If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship.

        If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null.

        The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).

    Our DbContext consists of 4 DbSets:

        public DbSet<Team> Teams { get; set; }
        public DbSet<Player> Players { get; set; }
        public DbSet<Mascot> Mascots { get; set; }
        public DbSet<Game> Games { get; set; }

    When we set the Mascot -> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.

    Data annotations:

        public class Mascot
        {
            public int Id { get; set; }
            public string Name { get; set; }

            [Required]
            public virtual Team Team { get; set; }
        }

    Fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
        }

    The Player -> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required; however, all the Player entities must be loaded into memory for the cascading to work properly.

    The Game -> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set ON DELETE CASCADE for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database-first example, this causes a circular reference error and throws the following SqlException:

        Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint.

    To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.HomeGames)
                .WithRequired(g => g.HomeTeam)
                .WillCascadeOnDelete(false);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.AwayGames)
                .WithRequired(g => g.AwayTeam)
                .WillCascadeOnDelete(false);

            base.OnModelCreating(modelBuilder);
        }

    Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team:

        foreach (Game awayGame in jets.AwayGames.ToArray())
        {
            entities.Games.Remove(awayGame);
        }
        foreach (Game homeGame in homeGames)
        {
            entities.Games.Remove(homeGame);
        }
        entities.Teams.Remove(jets);
        entities.SaveChanges();

    Overriding the Defaults – When and How To

    As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity. More information is available on MSDN.

    Going Further

    These were simple examples, but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that:

        Entity Framework Code First also works seamlessly with SQL Azure (MSDN)

        Database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers)

        You can use Code Based Migrations to manage database upgrades as your model continues to evolve (MSDN)

    Next Steps

    There's no time like the present to start the learning, so here's what you need to do:

        1. Get up-to-date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012)

        2. Build yourself a project to try these concepts out (or download the sample project)

        3. Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!).

    Good luck!

    About the Authors

    David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta, where he builds commercial software products for the energy industry. Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community, giving presentations both locally and at conferences. David also serves as the President of the Calgary .NET User Group.

    James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba, where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.

  • How to get tilemap transparency color working with TiledLib's Demo implementation?

    - by Adam LaBranche
    So the problem I'm having is that when using Nick Gravelyn's TiledLib pipeline for reading and drawing TMX maps in XNA, the transparency color I set in Tiled's editor works in the editor, but when I draw the map, the color that's supposed to become transparent is still drawn. The closest things to a solution that I've found are: 1) change my sprite batch's BlendState to NonPremultiplied (found this in a buried tweet); 2) get the pixels that are supposed to be transparent at some point, then Set them all to transparent. Solution 1 didn't work for me, and solution 2 seems hacky and not a very good way to approach this particular problem, especially since, according to the code, the custom pipeline processor reads in the transparent color and sets it as the color key for transparency, so something is going wrong somewhere. At least that's what it looks like the code is doing.

    TileSetContent.cs:

        if (imageNode.Attributes["trans"] != null)
        {
            string color = imageNode.Attributes["trans"].Value;
            string r = color.Substring(0, 2);
            string g = color.Substring(2, 2);
            string b = color.Substring(4, 2);
            this.ColorKey = new Color((byte)Convert.ToInt32(r, 16), (byte)Convert.ToInt32(g, 16), (byte)Convert.ToInt32(b, 16));
        }
        ...

    TiledHelpers.cs:

        // build the asset as an external reference
        OpaqueDataDictionary data = new OpaqueDataDictionary();
        data.Add("GenerateMipMaps", false);
        data.Add("ResizetoPowerOfTwo", false);
        data.Add("TextureFormat", TextureProcessorOutputFormat.Color);
        data.Add("ColorKeyEnabled", tileSet.ColorKey.HasValue);
        data.Add("ColorKeyColor", tileSet.ColorKey.HasValue ? tileSet.ColorKey.Value : Microsoft.Xna.Framework.Color.Magenta);
        tileSet.Texture = context.BuildAsset<Texture2DContent, Texture2DContent>(
            new ExternalReference<Texture2DContent>(path), null, data, null, asset);
        ...

    I can share more code as well if it helps to understand my problem. Thank you.

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask you people in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F# or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because of my understanding:

        One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even actually possible to express the specs in runnable form in a separate way that adds value?

        The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis. Given that type-safe functional code is a good tool for coding extremely close to what your static analyzer understands, a simple mistake like using x instead of y (both being coordinates) in your code cannot be covered. However, such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort.

        Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering this code must both be changed. This overhead of course is about constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't, compared to the benefits; but given how much of the ground unit tests are intended for is covered by statically typed functional programming, it feels like a constant overhead one can simply remove without penalty.

    From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: when you use such a programming style, to what extent do you use unit tests, and why (what quality do you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer, and hence in need of no unit test coverage?
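
    A hedged illustration of the x-versus-y point in Haskell (a made-up example):

        -- Both components are plain Doubles, so swapping them still type-checks.
        data Point = Point { px :: Double, py :: Double }

        -- Bug: this mirrors the point instead of translating it, and the
        -- compiler is perfectly happy.  (Intended: Point (x + d) y)
        moveRight :: Double -> Point -> Point
        moveRight d (Point x y) = Point y (x + d)

    A newtype per axis would let the compiler reject this, which is part of why the boundary between "covered by the static analyzer" and "needs a test" is itself a design decision in these languages.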
