Search Results

Search found 36081 results on 1444 pages for 'object expected'.

Page 490/1444

  • Need a Holistic view of your Concurrent Processing?

    - by cwarticki
    Need a holistic view of your Concurrent Processing? Choose the CP Analyzer. Go to Doc 1411723.1 for more details and the script download. The Concurrent Processing Analyzer is a self-service health-check script which reviews the overall Concurrent Processing footprint, analyzes the current configurations and settings for the environment, and provides feedback and recommendations on best practices. This is a non-invasive script which provides recommended actions to be performed on the instance it was run on. For production instances, always apply any changes to a recent clone first to ensure an expected outcome.

    - E-Business Applications Concurrent Processing Analyzer Overview
    - E-Business Applications Concurrent Request Analysis
    - E-Business Applications Concurrent Manager Analysis
    - Identifies Concurrent System setup and configurations
    - Identifies and recommends Concurrent best practices
    - Easy-to-add tool for regular Concurrent maintenance
    - Execute analysis anytime to compare trending from past outputs

    Feedback welcome!

    Read the article

  • Delta-update Firefox Aurora package from PPA

    - by ignite
    I am using Firefox Aurora on Ubuntu 12.04, installed via its PPA (ppa:ubuntu-mozilla-daily/firefox-aurora). As expected of Aurora, I usually get an update every 2-3 days. I have Firefox Aurora installed on Windows too, where I also get updates every 2-3 days, but there the update is usually 4-5 MB, while in Ubuntu it's always around 20 MB. What is the reason for this difference? Is there any way I can download and install only the changes, and not the entire Aurora package again and again?
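
    On the second question, a hedged pointer: the size difference is most likely because Mozilla's own updater (used on Windows) applies binary partial updates, while apt downloads the complete new .deb each time. The debdelta tool can close some of that gap by downloading binary deltas and rebuilding the full packages locally:

      sudo apt-get install debdelta
      sudo debdelta-upgrade && sudo apt-get upgrade

    However, it only works for packages a delta server actually publishes deltas for, which covers the official archives far better than PPAs, so it may not help with the firefox-aurora PPA at all.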

    Read the article

  • Error in mounting HDD

    - by Vikramjeet
    I get the following error whenever I mount my external HDD. It was working before; then I opted to safely remove the drive, and now it gives me this:

    Error mounting: mount exited with exit code 13: ntfs_mst_post_read_fixup_warn: magic: 0x43425355 size: 4096 usa_ofs: 8850 usa_count: 65535: Invalid argument Actual VCN (0x800006009000000) of index buffer is different from expected VCN (0x0). Failed to mount '/dev/sdb1': Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.
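
    If booting into Windows for chkdsk /f (the fix the message itself recommends) isn't convenient, a hedged partial alternative on the Ubuntu side is ntfsfix from the ntfs-3g package:

      sudo ntfsfix /dev/sdb1

    It only repairs a few common NTFS inconsistencies and schedules a full check for the next Windows boot; for errors like a mismatched VCN, chkdsk remains the reliable tool.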

    Read the article

  • How can I fade something to clear instead of white?

    - by Raven Dreamer
    I've got an XNA game which essentially has "floating combat text": short-lived messages that display for a fraction of a second and then disappear. I've recently added a gradual "fade-away" effect, like so:

      public void Update()
      {
          color.A -= 10;
          position.X += 3;
          if (color.A <= 10)
              isDead = true;
      }

    where color is the Color tint the message is drawn with. This works as expected; however, it fades the messages to white, which is very noticeable on my indigo background. Is there some way to fade to transparent rather than to white? Lerp-ing towards the background color isn't an option, as there may be something between the text and the background, which would simply invert the current problem.
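
    A hedged sketch of one common fix: under XNA 4.0's default SpriteBatch blend state (premultiplied alpha), lowering only A while leaving RGB at full strength is precisely what washes the glyphs out toward white; scaling all four channels together fades toward transparent instead. The baseColor, fade, font and text members below are assumptions standing in for your own fields:

      private Color baseColor = Color.White; // the tint the text starts with
      private float fade = 1f;               // 1 = opaque, 0 = fully faded

      public void Update()
      {
          fade -= 0.04f;   // roughly the same lifetime as color.A -= 10
          position.X += 3;
          if (fade <= 0f)
              isDead = true;
      }

      public void Draw(SpriteBatch spriteBatch)
      {
          // Color's * operator scales R, G, B and A together, which is what
          // premultiplied alpha blending expects for a fade-out.
          spriteBatch.DrawString(font, text, position, baseColor * fade);
      }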

    Read the article

  • Playing with the HTTP page cycle using JustMock

    - by mehfuzh
    In this post, I will cover test code that mocks the various elements needed to complete an HTTP page request and asserts the expected page cycle steps. To begin, I have a simple enumeration with my predefined page steps:

      public enum PageStep
      {
          PreInit,
          Load,
          PreRender,
          UnLoad
      }

    Once that is done, I first create the page object itself [not mocking]:

      Page page = new Page();

    Here, our target is to fire up the page process through a ProcessRequest call. If we take a look inside the method with .NET Reflector, the call trace goes like: ProcessRequest -> ProcessRequestWithNoAssert -> SetIntrinsics -> finally ProcessRequest. Inside SetIntrinsics, it requires calls on HttpRequest, HttpResponse and HttpBrowserCapabilities. With this clue at hand, we easily know which classes / calls we need to mock in order to get through the expected call. Accordingly, for HttpBrowserCapabilities our required test code will look like:

      var browser = Mock.Create<HttpBrowserCapabilities>();

      Mock.Arrange(() => browser.PreferredRenderingMime).Returns("text/html");
      Mock.Arrange(() => browser.PreferredResponseEncoding).Returns("UTF-8");
      Mock.Arrange(() => browser.PreferredRequestEncoding).Returns("UTF-8");

    Now, HttpBrowserCapabilities is obtained through [instance]HttpRequest.Browser. Therefore, we create the HttpRequest mock:

      var request = Mock.Create<HttpRequest>();

    Then we add the required get call:

      Mock.Arrange(() => request.Browser).Returns(browser);

    As [instance]Browser.PreferredRequestEncoding and [instance]Browser.PreferredResponseEncoding are also set on the request object, we can add the following lines as well to verify that they are set properly [not required though]:

      bool requestContentEncodingSet = false;
      Mock.ArrangeSet(() => request.ContentEncoding = Encoding.GetEncoding("UTF-8"))
          .DoInstead(() => requestContentEncodingSet = true);

    Similarly, for the response we can write:

      var response = Mock.Create<HttpResponse>();

      bool responseContentEncodingSet = false;
      Mock.ArrangeSet(() => response.ContentEncoding = Encoding.GetEncoding("UTF-8"))
          .DoInstead(() => responseContentEncodingSet = true);

    Finally, I create a mock of HttpContext and arrange its Request and Response properties to return the mocked versions:

      var context = Mock.Create<HttpContext>();

      Mock.Arrange(() => context.Request).Returns(request);
      Mock.Arrange(() => context.Response).Returns(response);

    As Page internally calls the RenderControl method, we just need to replace that with our own and, optionally, check that it is invoked properly:

      bool rendered = false;
      Mock.Arrange(() => page.RenderControl(Arg.Any<HtmlTextWriter>()))
          .DoInstead(() => rendered = true);

    That's it. The rest of the code is simple: I assert the page cycle with the PageSteps that I defined earlier:

      var pageSteps = new Queue<PageStep>();

      page.PreInit += delegate { pageSteps.Enqueue(PageStep.PreInit); };
      page.Load += delegate { pageSteps.Enqueue(PageStep.Load); };
      page.PreRender += delegate { pageSteps.Enqueue(PageStep.PreRender); };
      page.Unload += delegate { pageSteps.Enqueue(PageStep.UnLoad); };

      page.ProcessRequest(context);

      Assert.True(requestContentEncodingSet);
      Assert.True(responseContentEncodingSet);
      Assert.True(rendered);

      Assert.Equal(pageSteps.Dequeue(), PageStep.PreInit);
      Assert.Equal(pageSteps.Dequeue(), PageStep.Load);
      Assert.Equal(pageSteps.Dequeue(), PageStep.PreRender);
      Assert.Equal(pageSteps.Dequeue(), PageStep.UnLoad);

      Mock.Assert(request);
      Mock.Assert(response);

    You can get the test class shown in this post here and give it a try yourself, with, of course, JustMock :-). Enjoy!!

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful for making packages flexible, so that you can change object properties at run time and thus make a package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even the smallest BI project I have worked on had 50 packages), each package may have its own configuration needs. This means that each time you run a package you have to pass it the correct Package Configuration.

    I usually use XML configuration files, and I also force everyone who works with me to make sure that an object used in several packages has the same name in every package where it appears, in order to simplify configuration usage. Connection Managers are a good example of such objects: all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically, we define a set of "global" objects so that we can have one configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific, "local", XML configuration file for it, or we set the values that need to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx

    Now, how can we improve this even more? I'd like to have a package that, when run, automatically goes "somewhere", searches for global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The "somewhere" is a SQL Server table (a plausible reconstruction of its definition is sketched at the end of this post) in which you put the values that you want your packages to use at runtime.

    The ConfigurationFilter column specifies to which package a configuration row applies: a package will use a row only if the value in the ConfigurationFilter column is equal to its name. For example, a row whose ConfigurationFilter is "simple-package" will be used only by the package named "simple-package". There is one exception: the $$Global value indicates a configuration row that has to be applied to every package. With this simple behavior it's possible to replicate the "global" and "local" configuration approach I described before. The ConfigurationValue column contains the value you want applied at runtime, and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value, simply the hash of ConfigurationFilter plus PackagePath, so that it can be used as a primary key to guarantee the uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS to store DTS Configurations in SQL Server (the SQL Server SSIS Configuration type): http://msdn.microsoft.com/en-us/library/ms141682.aspx

    Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option:

      DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration"

    The /AC option expects a string in the format <database_server>;<database_name>;<table_name>; only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package. The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, which load the "global" and "local" configurations.

    Now, you may wonder why all this machinery is needed instead of simply always passing two XML DTS Configuration files to every package, one for the "global" and one for the "local" configuration, doing something like this:

      DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig"

    The problem is that this approach doesn't work if one of the two configuration files holds a value that has to be applied to an object that doesn't exist in the loaded package; that situation raises an error that halts package execution. To avoid it you would have to create a configuration file for each package, which unfortunately makes deployment and management harder, since you would have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deploying packages and their configurations to the pre-production and production environments has never been so easy!

    To use the Auto-Configuration option, download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218

    Feedback, as usual, is very welcome!
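
    Reconstructing the table from the column descriptions above, one plausible definition (column names come from the text; the data types and the computed checksum are assumptions) is:

      CREATE TABLE ssiscfg.[configuration] (
          ConfigurationFilter NVARCHAR(255) NOT NULL, -- package name, or $$Global
          ConfigurationValue  NVARCHAR(255) NOT NULL, -- value applied at runtime
          PackagePath         NVARCHAR(255) NOT NULL, -- property the value is applied to
          ConfiguredValueType NVARCHAR(20)  NOT NULL, -- data type of the value
          Checksum AS CHECKSUM(ConfigurationFilter, PackagePath) PERSISTED PRIMARY KEY
      );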

    Read the article

  • How do you go about training a replacement?

    - by SnOrfus
    I recently asked about leaving a position and got a lot of great answers. One of the common threads was that staying around to train the new person would be expected and could go a long way. Now, considering that (I think) most people don't stay at a company for long after they've given notice, and that it takes time for the company to interview and hire a replacement, that leaves a short amount of time to get someone up to speed. I've also never trained anyone before. I did a bunch of tutoring in university and college, but teaching a language or technology is far different from training someone to replace you at your job. So the question is: how do you go about training someone to replace you in a, potentially, short amount of time?

    Read the article

  • JSF 2.2 Update from Ed Burns

    - by arungupta
    In a recent interview the JavaServer Faces specification lead, Ed Burns, gave an update on JSF 2.2, a required component of the Java EE 7 platform. The work is expected to wrap up by CY 2012, and the schedule is publicly available. The interview provides an update on how the Tenant Scope from CDI and multi-templating will be included, and details which HTML5 content categories will be addressed. The EG discussions are mirrored at jsr344-experts@javaserverfaces-spec-public. You can also participate in the discussion by posting a message to users@javaserverfaces-spec-public. All the mailing lists are open for subscription, and the spec's JIRA provides more details about features targeted for the upcoming release. A blog at J-Development provides complete details about the new features coming in this version, and an Early Draft of the specification has been available for some time now.

    Read the article

  • Do we set the bar too high by requiring that code tests not suffer from buffer overflow?

    - by brice
    We are currently recruiting for a junior developer position working mainly in C on Linux. As part of the process, we require candidates to complete a code test at their leisure, in C. So far we have rejected two candidates on the basis that their code, although readable and in one case rather idiomatic, suffered from buffer overflow errors due to unbounded buffer writes. Are buffer overflows acceptable from a graduate developer? Are we setting the bar too high? What is the expected capability of graduate/junior engineers? [Edit]: We explicitly ask for error-checked, production-quality code, and we provide a test and build framework for the candidates.

    Read the article

  • Can't use PHP extension Mcrypt in Ubuntu 13.10 (Nginx, PHP-FPM)

    - by Marc-François
    I installed a fresh Ubuntu 13.10 on my laptop. As I usually do, I installed the packages I need for Web development: nginx, php5-fpm, mysql, php5-mysql, php5-mcrypt and a few others. After editing some configuration files, this usually works. But today, on 13.10, an error appears instead of the Web page I expected: Laravel requires the Mcrypt PHP extension. The php5-mcrypt package has been installed and reinstalled, yet php -m doesn't show mcrypt. Any idea where the problem could come from? I've done this setup many times and it has always worked.
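
    A hedged guess based on how 13.10 packages PHP: extensions are now enabled per-SAPI via mods-available, and php5-mcrypt has a known packaging quirk where its mcrypt.ini lands in /etc/php5/conf.d instead of /etc/php5/mods-available, so php5enmod can't find it. If that's what is happening here, the following should make php -m list mcrypt:

      sudo ln -s /etc/php5/conf.d/mcrypt.ini /etc/php5/mods-available/mcrypt.ini
      sudo php5enmod mcrypt
      sudo service php5-fpm restart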

    Read the article

  • AutoSSH for a robust tunnel

    - by Budric
    I'm trying to start an ssh tunnel from A to B and have it survive things like periodic network/wifi drops on A and remote server reboots on B. My ssh tunnel starts on A using an upstart script with the event start on (net-device-up IFACE=eth0). I've found autossh, which is supposed to handle these kinds of things, but I've had some trouble getting it to work. The upstart job executes:

      autossh -M 0 -2qTN -o "ServerAliveInterval 30" -o "ServerAliveCountMax 2" -L 5678:somehost:5678 user@B

    However, when I log into B and kill -9 that tunnel session, autossh just exits with "Connection to B closed by remote host." That's not what I expected autossh to do. Any advice on how to set this up? Any GUI service-monitoring utilities out there that essentially display a green light if a service is up? Thanks.
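
    Two documented autossh behaviors are worth ruling out here: by default autossh treats an ssh that dies within the first AUTOSSH_GATETIME seconds (30 by default) as a failed start and gives up, and it treats a clean exit (status 0) as deliberate and quits rather than reconnecting. A hedged sketch of the usual arrangement for a supervised tunnel, in upstart terms:

      env AUTOSSH_GATETIME=0
      exec autossh -M 0 -2qTN \
          -o "ServerAliveInterval 30" -o "ServerAliveCountMax 2" \
          -o "ExitOnForwardFailure yes" \
          -L 5678:somehost:5678 user@B

    Setting AUTOSSH_GATETIME=0 disables the failed-start heuristic, and ExitOnForwardFailure makes ssh exit (so autossh can restart it) when the forwarding cannot be established.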

    Read the article

  • June Edition - Oracle Database Insider

    - by jgelhaus
    Now available. The June edition of the Oracle Database Insider includes:

    NEWS

    June 10: Oracle CEO Larry Ellison Live on the Future of Database Performance. At a live webcast on June 10 at Oracle's headquarters, Oracle CEO Larry Ellison is expected to announce the upcoming availability of Oracle Database In-Memory, which dramatically accelerates business decision-making by processing analytical queries in memory without requiring any changes to existing applications.

    New Study Confirms Capital Expenditure Savings with Oracle Multitenant. A new study finds that Oracle Multitenant, an option of Oracle Database 12c, drives significant savings in capital expenditures by enabling the consolidation of a large number of databases on the same number or fewer hardware resources.

    VIDEO

    Oracle Database 12c: Multitenant Environment with Tom Kyte. Tom Kyte discusses Oracle Multitenant, followed by a demo of the multitenant architecture that includes moving a pluggable database (PDB) from one multitenant container database to another, cloning a PDB, and creating a new PDB.

    ...and much more.

    Read the article

  • Time it takes to develop 4*N lines of code. Nonlinear, but how nonlinear?

    - by Andrei
    It took me time T to develop program A, 1000 lines of code (SLOC), in a certain language/area/complexity. How much time will it take to develop program B, which is expected to be 4000 lines, in the same area/complexity/language? I expect it to take more than 4*T, right? Is there a formula for how T grows with SLOC? For a contractor, these estimates are important. Is there a formula from software engineering books, or from people's experience? Also, what methods exist to make the code bug-free before it hits QA?
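
    One published answer to the formula question: Boehm's COCOMO model estimates effort as a power law, Effort = a * (KLOC)^b, with the exponent b ranging from roughly 1.05 for small, well-understood "organic" projects to about 1.20 for complex "embedded" ones. Under that model, quadrupling the code size multiplies the effort by between 4^1.05 ≈ 4.3 and 4^1.20 ≈ 5.3, so expect somewhat more than 4*T, but not dramatically more. The coefficients are calibration parameters, though; they vary considerably by team and domain.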

    Read the article

  • HDMI port not recognized on Sony Vaio

    - by julio
    I am running Ubuntu 11.10 64-bit on a Sony VAIO VPC F11 with an NVIDIA GeForce 310M video card, the latest NVIDIA driver for 64-bit Linux (NVIDIA-Linux-x86_64-280.13), and a Windows partition with Win7 64-bit. The external monitor is a Samsung SyncMaster P2770. If I boot into the Windows partition, HDMI works as expected, with sound and video; under Linux, the HDMI port is apparently not recognized at all and provides no signal to the attached monitor. The nvidia-settings tool does not recognize any monitor connected to the HDMI port. Disper is installed and cannot recognize an attached external monitor either. Can anyone help me diagnose this issue and fix it if possible? The laptop has only the one HDMI port for connecting an external monitor, so if I can't get this working I'm stuck using either the laptop screen or Windows. Thanks

    Read the article

  • Numpad doesn't work after booting up - forced to reconnect USB keyboard after startup

    - by HorusKol
    I've tried this with two different USB keyboards, both of which work fine on a different computer running Windows XP. For some reason, the numeric keypad doesn't work properly immediately after booting up: neither the numbers work, nor the 'home' commands and so on that you can use with numlock off. It doesn't make a difference whether numlock is on or off; the keypad doesn't work correctly in either state. However, once I've booted the machine I can disconnect and reconnect the keyboard's USB connector, and it will work exactly as expected. I'm running GNOME on Ubuntu 10.04. The only other USB device connected is a mouse, and I've experienced no problems with that. This is a direct connection to the box (not via an external USB hub).

    Read the article

  • Find vertices of a convex hull

    - by Jeff Bullard
    I am attempting to do this within CGAL: from a 3D point cloud, find the convex hull, then loop over the finite facets of the convex hull and print each facet's vertices. It seems like there should be a straightforward way to do this; I would have expected that a 3D polyhedron would own a vector of facet objects, each of which in turn would own a vector of its edges, each of which in turn would own a vector of its vertices, and that there would be some access through this hierarchy using iterators. But so far I have been unable to find a simple way to navigate through this hierarchy (if it exists).
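
    For what it's worth, the hierarchy does exist, just expressed with handles and circulators rather than nested vectors (hedged, from memory of the Polyhedron_3 API): CGAL::convex_hull_3 can write the hull into a CGAL::Polyhedron_3, whose facets you traverse with facets_begin()/facets_end(). Each facet exposes its boundary through the halfedge circulator returned by facet_begin(), and each halfedge yields a vertex via vertex()->point(); walking the circulator until it returns to its start visits every vertex of that facet.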

    Read the article

  • Tool to search for packages whose installed version does not match any version from a repository?

    - by Ryan Thompson
    I just upgraded from Lucid to Maverick, and as expected, all my PPAs were disabled. I have re-enabled most of the ones that I want, but I would like to get a list of all packages that I installed from PPAs that I no longer have enabled. I feel that the best way to do this would be to search for all packages where the currently installed version does not match any version from a currently enabled repository. Is there an easy way to search for such packages? Command-line solutions welcome.
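
    One hedged starting point: the apt-show-versions package compares every installed package against the enabled repositories and flags those with no available counterpart, so something like

      sudo apt-get install apt-show-versions
      apt-show-versions | grep 'No available version'

    should approximate the list you want. aptitude's search term '~o' ("obsolete": installed but not available from any enabled archive) gives a similar list via aptitude search '~o'.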

    Read the article

  • Pre-order Nokia Lumia 900 from AT&T for $99.99 or Walmart for $49.99

    - by Gopinath
    Nokia Lumia 900, the flagship Windows Phone smartphone from Nokia, is available for pre-order from AT&T stores. With a two-year contract, you can grab the phone by paying $99.99 online, and pre-orders are expected to ship a day or two before the official launch in AT&T stores across the US. Walmart, in an aggressive move, is selling the Nokia Lumia 900 for just $49.99 with a two-year contract, so you save $50 more. Earlier, in January of this year, Nokia unveiled its plan to launch the Lumia 900 exclusively for the American market. The Nokia Lumia 900 features a 4.3-inch ClearBlack display with a resolution of 800 x 480 pixels, a 1.4 GHz Snapdragon processor, the Windows Phone 7.5 (Mango) OS, an 8-megapixel rear camera with an f/2.2 28mm Carl Zeiss lens, dual-LED flash, auto-focus and HD (720p) video recording, a 1-megapixel front-facing camera for video calls, 512 MB of RAM, 16 GB of internal memory (14.5 GB available to the user), and more. Pre-order Nokia Lumia 900 from AT&T and Walmart

    Read the article

  • Vdpau performance in Precise with Unity 3d

    - by bowser
    vdpau seems to be broken in Precise under Unity 3D: CPU usage ranges around 50-70% for 1080p movies, while the same movies use around 5-10% in Natty with vdpau enabled (under Unity 3D). The card is an Nvidia G105M. It doesn't seem to be an Nvidia driver problem, because under GNOME Shell everything works as expected, and I have tried different versions of the Nvidia drivers (295.20, 295.33, 295.40 and the latest 302.xx from xorg-edgers); the results are all the same: it works in GNOME Shell but not in Unity 3D. Disabling sync to vblank helps as long as the movie is not in full-screen mode, but it doesn't help for full screen. I have searched around and haven't found much info. I am wondering if others are experiencing the same problem and if there is some known workaround that I have missed. Unity 3D is otherwise very nice in Precise, but this is a (literally) show-stopping issue for me. Thanks. I have filed a bug here: https://bugs.launchpad.net/unity/+bug/993397

    Read the article

  • How to use Mercurial's LargeFiles extension? [migrated]

    - by DuncanBoehle
    I use Mercurial for game development, and I'm trying to use the LargeFiles extension included in Mercurial 2.0 to keep track of large binary assets. Unfortunately there isn't a whole lot of documentation on the extension, so I'm not sure how people are expected to use it. For example, is there any way to safely clean out the .hg/largefiles directory? If I'm on the tip revision and expect to always have internet access, then I don't need old versions of largefiles cluttering up the repository; that's the whole point of using the LargeFiles extension. Also, how do I get more fine-grained control over where the largefile store is? I can only assume it's created somewhere on the computer that ran hg init, but I have no idea about the details. Thanks!
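
    On the question of where the store lives, a hedged pointer: the extension documents a usercache setting that controls where the per-user largefiles cache is kept, which at least makes its location explicit, e.g. in your hgrc (the path here is illustrative):

      [largefiles]
      usercache = /path/to/largefiles/cache

    As for pruning .hg/largefiles, anything deleted there should be re-fetchable from the remote store on demand, but the extension doesn't document a supported cleanup command, so manual deletion is at your own risk.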

    Read the article

  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. What I expected to see and what I actually got (shown in the original post's screenshots) were quite different: the cube mesh shows up in the Model's bone list. I'm really confused. Why is the mesh I created showing up as a bone? And what is "Root Node"? Here are the .blend and .fbx files: here or here. Thanks.

    Read the article

  • Double-sided face with two normals

    - by Marnix
    I think this isn't possible, but I just want to check: is it possible to create a face in OpenGL that has two normals? I want both the inside and the outside of a cylinder to be drawn, with the lighting behaving as expected on each side instead of being calculated from the single normal given. I was trying to do this with backface culling off, so I would have both faces, but the light was of course calculated wrongly for one side. Is this possible, or do I have to draw an inside and an outside, i.e. draw everything twice?
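
    For the fixed-function pipeline there is a switch for exactly this, assuming classic OpenGL lighting rather than a custom shader: glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE) makes OpenGL flip the normal for back-facing fragments and light the two sides independently (including separate back-face material properties if you set them), so with culling disabled a single set of faces is lit correctly from both sides and nothing needs to be drawn twice. In a shader you would do the equivalent yourself, e.g. negate the normal when gl_FrontFacing is false.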

    Read the article

  • iPad Discussion

    - by Dave Campbell
    I had reason to meet up with someone I don't see very often a bit ago. In the course of the conversation, he told me he bought an iPad. I don't know if I was expected to ooh and ahh, but I didn't. After he finished saying how cool it was and how much he and his wife liked it, I commented "no Flash and no Silverlight", after which followed this: Him: "You don't need it, HTML5 can do everything Flash and Silverlight does" Me: "Wait... you're telling me that the iPad converts existing Flash content into HTML5 and then renders it?" Him: "No, but once all the existing sites are converted to HTML5 it'll be fine and we don't need Flash... or Silverlight" 'all the existing sites' ... huh ... I didn't get a notice, maybe they're doing them alphabetically or something :) Ok Spanky... you keep drinking that Kool-Aid from Steve, I've got mine... it's blue with Silverlight.

    Read the article

  • Why does my display keep turning off every 10 minutes?

    - by George Edison
    I have installed Ubuntu 11.10 (Oneiric) in VirtualBox, along with virtualbox-guest-additions. The display resolution adapts to the size of the window as expected, but the display turns off after 10 minutes of inactivity. Thinking there was some sort of power management issue at play, I went to Power in the settings dialog, but nothing there mentions "turn off display after xxx minutes", so I assume everything is configured correctly there. Next I went to Screen and found the option "Turn off after:". Aha, I thought, now I have found it. But alas: even after setting it to "never" and restarting multiple times, the display still shuts off after 10 minutes. What am I missing? What option am I overlooking?
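
    Two hedged things to try from a terminal, since the Screen dialog's setting isn't taking effect: X's own screen-saver blanking and DPMS timers run independently of GNOME's setting and default to roughly 10 minutes, so

      xset s off
      xset -dpms

    disables them for the current session. GNOME's idle timeout can also be set directly with gsettings set org.gnome.desktop.session idle-delay 0 (0 meaning never). If one of these makes the blanking stop, you've found which layer was turning the display off.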

    Read the article

  • How did programmers resolve their problems before the internet?

    - by 9a3eedi
    When programming, any time I get stuck, perhaps with a compiler error that doesn't make sense or a GUI function that didn't do what I expected, I automatically google my problem, find someone else who faced the same thing, and read about what's going on and why I'm getting the problem. Before the internet, how did people handle these situations? People used to read books and manuals more, I know. But books don't explain everything, like the odd compiler problem you get sometimes, or nothing showing up on your screen despite you clearly writing correct OpenGL code. How did people cope when facing challenges? Did they simply bash their heads against the wall until they figured it out? Was there something people regularly did on the side that let them get unstuck more easily? Were libraries and compilers much simpler back then? I ask because I sometimes feel guilty about depending on Google so much, when I'm pretty sure programmers before my time were more independent in facing these matters.

    Read the article
