Search Results

Search found 31206 results on 1249 pages for 'version detection'.


  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
     Originally posted by John Brunswick. In this installment of our Minimalist Approach to Content Governance we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase, it is time to set up and begin the production of content. For this to be done correctly, it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process, as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure that the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to automatically take care of some of the details.

     1. Determine Access

     Why - Even if content is not something that needs to be restricted for security reasons, it is helpful to apply access rights so that the content ends up being visible only to the users it relates to. This will greatly improve user experience. For instance, if your team is working on a group project, many of your fellow company employees do not need to see the content that is being worked on for that project.

     How - Make use of native content features that allow propagation of security and meta data from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security, as well as meta data policies, for even the most unorganized users. The default settings at a parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project.

     Impact - Users can find information with less effort, as they will only be exposed to what they need for their work, and they can leverage advanced search features to take advantage of meta data assigned to content. The combination of default security and meta data will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next two posts.

     2. Assign Workflow (optional depending on nature of content)

     Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements.

     How - Oracle's Universal Content Management offers two ways of helping to workflow content without much effort. Workflow can be applied to content based on Criteria acting on meta data, or explicitly assigned to content with a Basic workflow.

     Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached.

     By using inheritance from parent folders within the content management system, content can automatically be given the right security, meta data and workflow information for a particular project's content. This relieves management teams and content contributors of the burden of doing this for every piece of content. We will cover more about the Manage phase within the content lifecycle in our next installment.

    Read the article

  • PeopleSoft Grants & the Federal Agency Letter of Credit Draw Changes

    - by Mark Rosenberg
    For decades, most, if not all, US Federal agencies that sponsor research allowed grant recipients to request and receive payments using pooled accounts, commonly known as pooled letter of credit (LOC) draws. This gave organizations, such as universities and hospitals, fast and efficient access to reimbursement of the expenditures they incurred conducting research across a portfolio of grants. To support this business practice, the PeopleSoft Grants solution has delivered an LOC Draw report to provide the total request amount along with all of the supporting invoice details for reconciliation and audit purposes. Now, in an attempt to provide greater transparency, eliminate fraud, strengthen accountability for grant-related financial transactions, and simplify grant award closeout, many US Federal sponsors are transitioning from the "pooling" letter of credit draw method to requesting on a "grant-by-grant" basis. The National Science Foundation, the second largest issuer of Federal awards, already transitioned to detailed grant draws in 2013. And, in response to the U.S. Department of Health and Human Services (HHS) directive to HHS-supported agencies, the largest Federal awards sponsor, the National Institutes of Health (NIH), will fully transition to the new HHS subaccount draw method. This will require NIH award recipients to request payments based on actual expenses incurred on an award-by-award basis. NIH is expected to fully transition to this new draw method by the end of Federal fiscal year 2015. (The NIH had planned to fully transition to this new method by the end of fiscal 2014; however, the impact to institutions was deemed significant enough that a reprieve was recently granted.)

    In light of these new Federal draw requirements, we have recently released these new features to aid our customers on both PeopleSoft Grants releases 9.1 and 9.2:

    1. Federal Award Identification Number on the Proposal and Award Profile
    2. Letter of credit fields on contract lines to support award basis draws and comply with Federal close out mandates
    3. Process to produce both pro forma and final LOC Draw Reports in BI Publisher report format
    4. Subaccount ID field on the LOC Summary and a new BI Publisher version of the LOC Summary report
    5. Added Subaccount field and contract info to be displayed on the LOC summary page
    6. Ability to generate pro forma and invoiced draw listings by a variety of dimensions
    7. Queries for generation and manipulation of data to upload into sponsor payment request systems and perform payment matching
    8. Contracts LOC Close Out query to quickly review final balances prior to initiating final draws and preparing Federal Financial Reports prior to close

    The PeopleSoft Development team actively monitors this and other major Federal changes and continues working closely with the Grants Product Advisory Group of the Higher Education User Group to ensure a clear understanding of what our customers need in order to transition to new approaches for doing business with the Federal government. For more information regarding the enhancements to the PeopleSoft Grants solution, existing customers can log in to My Oracle Support and review the Enhancements to Letter of Credit Process (Doc ID 1912692.1) associated with resolution ID 904830. This enhanced LOC functionality is available in both PeopleSoft FSCM 9.1 Bundle #31 and PeopleSoft FSCM 9.2 Update Image 8.

    Read the article

  • Resources to Help You Get Up To Speed on ADF Mobile

    - by Joe Huang
    Hi, everyone: By now, I hope you have had a chance to review the sample applications and try deploying them. This is a great way to get started on learning ADF Mobile. To help you get started, here is a central list of steps to get up to speed on ADF Mobile, with related resources that can help you develop your mobile application.

    1. Check out the ADF Mobile Landing Page on the Oracle Technology Network.
    2. View this introductory video.
    3. Read this Data Sheet and FAQ on ADF Mobile.
    4. Download JDeveloper 11.1.2.3. Download the generic version of JDeveloper for installation on a Mac; note that there are workarounds required to install JDeveloper on a Mac.
    5. Download the ADF Mobile extension from the JDeveloper Update Center or here. Please note you will need to configure JDeveloper for Internet access (in HTTP Proxy preferences) in order to install the extension, as the installation process will prompt you for a license that is linked off Oracle's web site.
    6. View this end-to-end application creation video.
    7. View this end-to-end iOS deployment video if you are developing for iOS devices.
    8. Configure your development environment, including the location of the SDK, etc., in the JDeveloper Tools - Preferences - ADF Mobile dialog box. The two videos above should cover some of these configuration steps.
    9. Check out the sample applications shipped with JDeveloper, and then deploy them to simulators/devices using the steps outlined in the video above. This blog entry outlines all sample applications shipped with JDeveloper.
    10. Develop a simple mobile application by following this tutorial.
    11. Try out the Oracle OpenWorld 2012 hands-on lab to get a sense of how to programmatically access server data. You will need these source files.
    12. Ask questions in the ADF/JDeveloper Forum.
    13. Search the ADF Mobile Preview Forum for entries from ADF Mobile beta testing participants.
    14. For all other questions, check out this exhaustive and detailed ADF Mobile Developer Guide.
    15. If something does not seem right, check out the ADF Mobile Release Notes.

    Thanks,
    Oracle ADF Mobile Product Management Team

    Read the article

  • Got Samba, Got PyNeighbourhood but still no connection. What else do I need?

    - by Frank A
    I am sure I had already hit post before, but then could only find the question by backing through the browser. Was it deleted? Is the question too dumb? Sorry that I do not know the right jargon; I am just trying to get answers to my problem, and I have reworded things a bit.

    This seems to be a number one requirement for lots of people, and two months on from setting up my Ubuntu PC, I am still unable to get a lasting connection in either direction. Adding a Windows PC to a network is so easy: just a few clicks, and you get on with using it all. Using all-command approaches and modifying configuration files is hardly user friendly. Googling brings up thousands of solutions, but mostly they are too techy or assume the user is fully aware of how to use Linux. I do realise that there must be a lot of flavours for connecting to networks.

    So far I have installed Samba and fiddled with its config file. The day I did all that, it worked from XP to Ubuntu. When I came back two days later to transfer my data over, it would not connect, although the share does show up in Windows (XP) My Network Places. Today I installed PyNeighbourhood, and this shows the Ubuntu box and all of the shares I had created at some point on Ubuntu; it even shows this under the XP workgroup name. But the instructions on setting the connection up seem to relate to an earlier version, and nothing seems to work there either. (I unshared most of those test folders, but they still show up here - but that is another question.)

    When I click on mount - I can only click on the one on the Ubuntu machine; there is one with no name, which I assume to be my attempt to add one XP shared drive using its IP address - I get errors:

        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        mount error(6): No such device or address
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    OK, I tried to find the manual referred to - only an old comment that a manual would be produced for future versions. I saw in another thread that Winbind is needed as well, or at least I assume "as well"? Totally lost again. Please help: what else needs to be installed to connect to Windows PCs on the network?
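
    One way to narrow this down is to take the GUI out of the equation and mount the XP share by hand with mount.cifs. The commands below are only a sketch: the address, share name and user are placeholders for your own values, and the package is cifs-utils on recent releases (smbfs on older ones):

        sudo apt-get install cifs-utils winbind
        sudo mkdir -p /mnt/xpshare
        # 192.168.1.10, "shared" and "xpuser" are hypothetical - substitute your XP box's IP, share name and Windows user
        sudo mount -t cifs //192.168.1.10/shared /mnt/xpshare -o username=xpuser

    If the manual mount works, the problem lies in PyNeighbourhood rather than in Samba/CIFS itself; if it fails, the error printed on the terminal is usually more specific than what the GUI reports.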

    Read the article

  • Need help on a problemset in a programming contest

    - by topher
    I attended a local programming contest in my country. The name of the contest is "ACM-ICPC Indonesia National Contest 2013". The contest ended on 2013-10-13 15:00:00 (GMT+7), and I am still curious about one of the problems. You can find the original version of the problem here.

    Brief problem explanation: There is a set of "jobs" (tasks) that should be performed on several "servers" (computers). Each job should be executed strictly from start time Si to end time Ei. Each server can only perform one task at a time. (The complicated thing goes here.) It takes some time for a server to switch from one job to another: if a server finishes job Jx, then to start job Jy it will need an intermission time Tx,y after job Jx completes. This is the time required by the server to clean up job Jx and load job Jy. In other words, job Jy can run after job Jx on the same server if and only if Ex + Tx,y <= Sy. The problem is to compute the minimum number of servers needed to do all jobs.

    Example: let there be 3 jobs:

        S(1) = 3 and E(1) = 6
        S(2) = 10 and E(2) = 15
        S(3) = 16 and E(3) = 20
        T(1,2) = 2, T(1,3) = 5
        T(2,1) = 0, T(2,3) = 3
        T(3,1) = 0, T(3,2) = 0

    In this example, we need 2 servers:

        Server 1: J(1), J(2)
        Server 2: J(3)

    Sample Input (short explanation: the first 3 is the number of test cases, followed by the number of jobs for each case - the second 3 means that there are 3 jobs in case 1 - then Si and Ei for each job, then the T matrix, sized by the number of jobs):

        3
        3
        3 6
        10 15
        16 20
        0 2 5
        0 0 3
        0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 0 0 0
        0 0 0 0
        0 0 0 0
        0 0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 50 50 50
        50 0 50 50
        50 50 0 50
        50 50 50 0

    Sample Output:

        Case #1: 2
        Case #2: 1
        Case #3: 4

    Personal comments: The time required can be represented as a graph matrix, so I suppose this is a directed acyclic graph problem. The methods I have tried so far are brute force and greedy, but both got Wrong Answer. (Unfortunately I don't have my code anymore.) It could probably be solved by dynamic programming too, but I'm not sure. I really have no clear idea on how to solve this problem, so a simple hint or insight would be very helpful to me.
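
    One standard line of attack, offered here as a sketch rather than the contest's official solution: build a graph with one node per job and a directed edge Jx -> Jy whenever Ex + Tx,y <= Sy. Edges always point from earlier-ending to later-starting jobs, so the graph is a DAG, and one server's schedule is exactly a path in it. The minimum number of servers is therefore the minimum path cover of the DAG, which equals N minus the size of a maximum bipartite matching between jobs-as-predecessors and jobs-as-successors. A minimal C# implementation of Kuhn's augmenting-path matching (all names are mine, and the loop over multiple test cases is omitted):

        using System;

        class MinServers
        {
            static bool[][] adj;   // adj[u][v] = true if job v can follow job u on the same server
            static int[] matchTo;  // matchTo[v] = job matched as predecessor of v, or -1
            static bool[] visited;

            // Kuhn's algorithm: try to find an augmenting path starting from job u.
            static bool TryAugment(int u)
            {
                for (int v = 0; v < adj.Length; v++)
                {
                    if (adj[u][v] && !visited[v])
                    {
                        visited[v] = true;
                        if (matchTo[v] == -1 || TryAugment(matchTo[v]))
                        {
                            matchTo[v] = u;
                            return true;
                        }
                    }
                }
                return false;
            }

            static int Solve(int n, int[] S, int[] E, int[][] T)
            {
                adj = new bool[n][];
                for (int u = 0; u < n; u++)
                {
                    adj[u] = new bool[n];
                    for (int v = 0; v < n; v++)
                        adj[u][v] = u != v && E[u] + T[u][v] <= S[v];
                }

                matchTo = new int[n];
                for (int v = 0; v < n; v++) matchTo[v] = -1;

                int matching = 0;
                for (int u = 0; u < n; u++)
                {
                    visited = new bool[n];
                    if (TryAugment(u)) matching++;
                }

                // Minimum path cover of a DAG = nodes minus matched edges.
                return n - matching;
            }

            static void Main()
            {
                // Sample case 1 from the problem statement (0-indexed jobs).
                int[] S = { 3, 10, 16 };
                int[] E = { 6, 15, 20 };
                int[][] T =
                {
                    new[] { 0, 2, 5 },
                    new[] { 0, 0, 3 },
                    new[] { 0, 0, 0 }
                };
                Console.WriteLine(Solve(3, S, E, T)); // prints 2
            }
        }

    Worked against the samples above, this yields 2, 1 and 4: in case 1 the only edges are J1 -> J2 (6 + 2 <= 10) and J1 -> J3 (6 + 5 <= 16), so one matched edge chains J1 with J2 and the answer is 3 - 1 = 2.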

    Read the article

  • Does the method of adjustment matter, or just the final calibration?

    - by Steve
    A company produces software (and hardware) that is used both to perform automatic adjustments on electronic test equipment and to perform calibrations of the same equipment. The results of the calibrations are put onto a certificate of calibration that is sent to the customer along with the equipment. This calibration certificate states various conditions of the calibration, such as what hardware (models/serial numbers) and software (version) were used to perform the calibration, as well as things like environmental conditions, etc.

    Making the assumption that the software used to produce the data listed on the certificate of calibration must have gone through a test/release process and must be considered "released" software - does this also mean that the software used for adjustment must also be released? I believe that the method (software/environmental conditions/etc.) used or present during adjustment doesn't matter; all that really matters is the end result of the calibration, the conditions present during the calibration, and whether or not the equipment was within specification.

    The real question I'm hoping to get answered: is there a reputable source (e.g. NIST or somewhere similar) that addresses this question? (I have searched...)

    The thinking is that during high-volume production runs, the "unreleased" system can be used to perform adjustments, as long as a released system is used to perform the calibrations, since the time required to perform the adjustments is much longer than the calibration. This unreleased system will eventually become released for use, but currently is not.

    Also, please note that there is a distinction between "adjustment" and "calibration". The definition of calibration from the BIPM International Vocabulary of Metrology, 2.39: operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication. Followed by NOTE 2 (emphasis in original text): calibration should not be confused with adjustment of a measuring system, often mistakenly called "self-calibration", nor with verification of calibration.

    As a side note, I'm not sure why this got downvoted. It's regarding software and its use before and after release. I believe there is a best practice that can be applied, and this is (hopefully) not primarily opinion-based.

    Read the article

  • Unity works on my PC but not on the Server. What did I miss?

    - by Erik France
    I have a web service using Microsoft Unity to hook the pieces together. It all works fine on my PC, but when I put it on the web server, I receive this error message:

        System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: The value of the property 'type' cannot be parsed. The error is: Method 'GetClaimsForUser' in type 'WebService.Implementation.ClaimsRetriever' from assembly 'WebService.Implementation, Version=1.0.0.0, Culture=neutral, PublicKeyToken=62cac0f1a908971a' does not have an implementation.

    If I look at the web.config, I see the following:

        <unity>
          <typeAliases>
            <typeAlias alias="ITokenGenerator" type="WebService.Interfaces.ITokenGenerator, WebService.Interfaces" />
            <typeAlias alias="TokenGenerator" type="WebService.Implementation.TokenGenerator, WebService.Implementation" />
            <typeAlias alias="IClaimsRetriever" type="WebService.Interfaces.IClaimsRetriever, WebService.Interfaces" />
            <typeAlias alias="ClaimsRetriever" type="WebService.Implementation.ClaimsRetriever, WebService.Implementation" />
            <typeAlias alias="TokenGeneratorSettings" type="WebService.Implementation.TokenGeneratorSettings, WebService.Implementation" />
            <typeAlias alias="String" type="System.String, mscorlib" />
          </typeAliases>
          <containers>
            <container>
              <types>
                <type type="ITokenGenerator" mapTo="TokenGenerator">
                  <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration">
                    <constructor>
                      <param name="retriever" parameterType="IClaimsRetriever">
                        <dependency />
                      </param>
                      <param name="settings" parameterType="TokenGeneratorSettings">
                        <dependency />
                      </param>
                    </constructor>
                  </typeConfig>
                </type>
                <type type="IClaimsRetriever" mapTo="ClaimsRetriever">
                  <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration">
                    <constructor>
                      <param name="connectionStringName" parameterType="String">
                        <value value="devDatabase" type="String" />
                      </param>
                    </constructor>
                  </typeConfig>
                </type>
              </types>
            </container>
          </containers>
        </unity>

    I have another web service with an almost identical config running on the web server, but this new web service will not run. Any ideas on what I have not told Unity to do? Or maybe what I told Unity to do incorrectly?
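
    A quick way to separate a configuration problem from a deployment problem is to wire the same object graph up in code, with no XML involved. The sketch below assumes the Unity 2.x API and the type names from the config above; if this version throws the same "does not have an implementation" error on the server, the likely culprit is the deployed WebService.Implementation assembly (for example, a stale build compiled against an older version of WebService.Interfaces) rather than the Unity configuration:

        using Microsoft.Practices.Unity;
        using WebService.Interfaces;
        using WebService.Implementation;

        var container = new UnityContainer();

        // Equivalent of the <type type="IClaimsRetriever" mapTo="ClaimsRetriever"> block
        container.RegisterType<IClaimsRetriever, ClaimsRetriever>(
            new InjectionConstructor("devDatabase"));

        // Equivalent of the <type type="ITokenGenerator" mapTo="TokenGenerator"> block
        container.RegisterType<ITokenGenerator, TokenGenerator>(
            new InjectionConstructor(
                new ResolvedParameter<IClaimsRetriever>(),
                new ResolvedParameter<TokenGeneratorSettings>()));

        // If the assembly on the server is stale, this line should fail the same way
        var generator = container.Resolve<ITokenGenerator>();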

    Read the article

  • Top down space game control problem

    - by Phil
    As the title suggests, I'm developing a top down space game. I'm not looking to use Newtonian physics with the player-controlled ship. I'm trying to achieve a control scheme somewhat similar to that of FlatSpace 2 (awesome game). I can't figure out how to achieve this feeling with keyboard controls as opposed to mouse controls, though. Any suggestions? I'm using Unity3D, and C# or JavaScript (UnityScript, or whatever the correct term is) works fine if you want to drop some code examples.

    Edit: Of course I should describe FlatSpace 2's control scheme, sorry. You hold the mouse button down and move the mouse in the direction you want the ship to move in. But it's not the controls I don't know how to do, but rather the feeling of a mix of driving a car and flying an aircraft. It's really well made. YouTube link: FlatSpace2 on iPhone. I'm not developing an iPhone game, but the video shows the principle of the movement style.

    Edit 2: As there seems to be a slight interest, I'll post the version of the code I used to continue. It works well enough. Sometimes good enough is sufficient!

        using UnityEngine;
        using System.Collections;

        public class ShipMovement : MonoBehaviour
        {
            public float directionModifier;
            float shipRotationAngle;
            public float shipRotationSpeed = 0;
            public double thrustModifier;
            public double accelerationModifier;
            public double shipBaseAcceleration = 0;
            public Vector2 directionVector;
            public Vector2 accelerationVector = new Vector2(0, 0);
            public Vector2 frictionVector = new Vector2(0, 0);
            public int shipFriction = 0;
            public Vector2 shipSpeedVector;
            public Vector2 shipPositionVector;
            public Vector2 speedCap = new Vector2(0, 0);

            void Update()
            {
                directionModifier = -Input.GetAxis("Horizontal");
                shipRotationAngle += (shipRotationSpeed * directionModifier) * Time.deltaTime;
                thrustModifier = Input.GetAxis("Vertical");
                accelerationModifier = (shipBaseAcceleration * thrustModifier) * Time.deltaTime;
                directionVector = new Vector2(Mathf.Cos(shipRotationAngle), Mathf.Sin(shipRotationAngle));
                //accelerationVector = Vector2(directionVector.x * System.Convert.ToDouble(accelerationModifier), directionVector.y * System.Convert.ToDouble(accelerationModifier));
                accelerationVector.x = directionVector.x * (float)accelerationModifier;
                accelerationVector.y = directionVector.y * (float)accelerationModifier;

                // Set friction based on how "floaty" controls you want
                shipSpeedVector.x *= 0.9f; // Use a variable here
                shipSpeedVector.y *= 0.9f; // <-- as well

                shipSpeedVector += accelerationVector;
                shipPositionVector += shipSpeedVector;
                gameObject.transform.position = new Vector3(shipPositionVector.x, 0, shipPositionVector.y);
            }
        }

    Read the article

  • Should EICAR be updated to test the revision of Antivirus system?

    - by makerofthings7
    I'm posting this here since programmers write viruses, and AV software. They also have the best knowledge of heuristics and how AV systems work (cloaking, etc.). The EICAR test file was used to functionally test an antivirus system. As it stands today, almost every AV system will flag EICAR as being a "test" virus. For more information on this historic test virus, please click here.

    Currently the EICAR test file is only good for testing the presence of an AV solution, but it doesn't check for engine file or DAT file up-to-dateness. In other words, why do a functional test of a system that could have definition files that are more than 10 years old? With the increase of zero-day threats it doesn't make much sense to functionally test your system using EICAR. That being said, I think EICAR needs to be updated/modified to be an effective test that works in conjunction with an AV management solution. This question is about real-world testing, without using live viruses, which is the intent of the original EICAR. That being said, I'm proposing a new EICAR file format with the appendage of an XML blob that will conditionally cause the antivirus engine to respond:

        X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-EXTENDED-ANTIVIRUS-TEST-FILE!$H+H*
        <?xml version="1.0"?>
        <engine-valid-from>2010-1-1Z</engine-valid-from>
        <signature-valid-from>2010-1-1Z</signature-valid-from>
        <authkey>MyTestKeyHere</authkey>

    In this sample, the antivirus engine would only alert on the EICAR file if both the engine file and the signature file are equal to or newer than the valid-from dates. Also, there is a passcode that will restrict the usage of EICAR to the system administrator. If you have a background in Test-Driven Development (TDD) for software, you may see that all I'm doing is applying the principles of TDD to my infrastructure. Based on your experience and contacts, how can I make this idea happen?

    Read the article

  • Seven Accounting Changes for 2010

    - by Theresa Hickman
    I read a very interesting article called Seven Accounting Changes That Will Affect Your 2010 Annual Report from SmartPros that nicely summarized how 2010 annual financial statements will be impacted. Here's a Reader's Digest version of the changes:

    1. Changes to revenue recognition if you sell bundled products with multiple deliverables. Old Rule: You needed to objectively establish the "fair value" of each bundled item. So if you sold a dishwasher plus installation and could not establish the fair value of the installation, you might have to delay recognizing revenue of the dishwasher days or weeks later until it was installed. New Rule (ASU 2009-13): "Objective" proof of each service or good is no longer required; you can simply estimate the selling price of the installation and warranty. So the dishwasher vendor can recognize the dishwasher revenue immediately at the point of sale without waiting a few weeks for the installation. Then they can recognize the estimated value of the installation after it is complete.

    2. Changes to revenue recognition for devices with embedded software. Old Rule: Hardware devices with embedded software, such as the iPhone, had to follow stringent software revrec rules. This forced Apple to recognize iPhone revenues over two years, the period of time that software updates were provided. New Rule (ASU 2009-14): Software revrec rules no longer apply to these devices with embedded software; these devices can now follow ASU 2009-13. This allows vendors, such as Apple, to recognize revenue sooner.

    3. Fair value disclosures: Companies (both public and private) now need to spend extra time gathering, summarizing, and disclosing information about items measured at fair value, such as significant transfers in and out of Level 1 (quoted market price), Level 2 (valuation based on observable markets), and Level 3 (valuations based on internal information).

    4. Consolidation of variable interest entities (a.k.a. special purpose entities): Consolidation rules for variable interest entities now require a qualitative, not quantitative, analysis to determine the primary beneficiary. Instead of simply looking at the percentage of voting interests, the primary beneficiary could have less than the majority interests as long as it has the power to direct the activities and absorb any losses.

    5. XBRL: Starting in June 2011, all U.S. public companies are required to file financial statements to the SEC using XBRL. Note: Oracle supports XBRL reporting.

    6. Non-GAAP financial disclosures: Companies that report non-GAAP measures of performance, such as EBITDA in SEC filings, have more flexibility. The new interpretations can be found here: http://www.sec.gov/divisions/corpfin/guidance/nongaapinterp.htm.

    7. Loss contingencies disclosures: Companies should expect additional scrutiny of their loss disclosures, such as those from litigation losses, in their annual financial statements. The SEC wants more disclosures about loss contingencies sooner, instead of after the cases are settled.

    Read the article

  • Maximizing the Value of Software

    - by David Dorf
    A few years ago we decided to increase our investments in documenting retail processes and architectures. There were several goals, but the main two were to help retailers maximize the value they derive from our software and to help system integrators implement our software faster. The sale is only part of our success metric -- it's actually more important that the customer realizes the benefits of the software. That's when we actually celebrate.

    This week many of our customers are gathered in Chicago to discuss their successes during our annual Crosstalk conference. That provides the perfect forum to announce the release of the Oracle Retail Reference Library. The RRL is available for free to Oracle Retail customers and partners. It contains thousands of hours of work and represents years of experience in the retail industry. The Retail Reference Library is composed of three offerings:

    Retail Reference Model: We've been sharing the RRM for several years now, with lots of accolades. The RRM is a set of business process diagrams at varying levels of granularity. This release marks the debut of Visio documents, which should make it easier for retailers to adopt and edit the diagrams. The processes represent an approximation of the Oracle Retail software, but at higher levels they are pretty generic and therefore usable with other software as well. Using these processes, the business and IT are better able to communicate the expectations of the software. They can be used to guide customization when necessary, and help identify areas for optimization in the organization.

    Retail Reference Architecture: When embarking on a software implementation project, it can be daunting to start from a blank sheet of paper. So we offer the RRA, a comprehensive set of documents that describe the retail enterprise in terms of logical architecture, physical deployments, and systems integration. These documents and diagrams describe how all the systems typically found in a retailer enterprise work together. They serve as a way to jump-start implementations using best practices we've captured over the years.

    Retail Semantic Glossary: Have you ever seen two people argue over something because they're using misaligned terminology? It's a huge waste, and it happens all the time. The Retail Semantic Glossary is a simple application that allows retailers to define terms and metrics in a centralized database. This initial version comes with limited content, with the goal of adding more over subsequent releases. This is the single source for defining key performance indicators, metrics, algorithms, and terms so that the retail organization speaks in a consistent language.

    These three offerings are downloaded from MyOracleSupport separately and linked together using the start page above. Everything is navigated using a Web browser. See the Oracle Retail Documentation blog for more details.

    Read the article

  • Second Monitor stays black/in power save mode

    - by Rob
    I'm using two monitors: a Belinea o.display 1 (recognized as a Rogen Tech Distribution Inc 20" by Ubuntu, but working fine) on the DVI output (connected via a DVI-to-VGA adapter) as my primary monitor, and a Dell 19" (recognized correctly) on the HDMI output (via an HDMI-to-DVI adapter) as the secondary monitor. The graphics controller is a GeForce 9500 GS. I'm running a fully updated Ubuntu 13.04 with nouveau 1:1.0.7-0ubuntu1.

    The problem is that the second monitor (the Dell) never seems to come out of standby during boot: the screen stays black and the status LED on the monitor stays orange (it's green when it's on). It is correctly recognized, and the size of the desktop is set accordingly; it just stays black. Changing any setting via xrandr/arandr/etc. does nothing. The on-screen menu of the monitor reports it to be in power save mode. When using the proprietary NVIDIA drivers, the second monitor works just fine. But these drivers cause a lot of other problems on my system, so I would really like to avoid them.

    On Ubuntu 12.10 I had found a workaround: when moving the relative position of the second monitor slightly down and then up again, it would turn on and function normally:

        xrandr --output DVI-I-1 --mode 1680x1050 --pos 1280x0 --rotate normal --output HDMI-1 --mode 1280x1024 --pos 0x88 --rotate normal
        sleep 2
        xrandr --output DVI-I-1 --mode 1680x1050 --pos 1280x0 --rotate normal --output HDMI-1 --mode 1280x1024 --pos 0x0 --rotate normal

    This workaround stopped working after the update to 13.04, and now I'm looking for a new solution. Has anyone experienced something similar?

    xrandr output:

        Screen 0: minimum 320 x 200, current 2960 x 1050, maximum 8192 x 8192
        DVI-I-1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 433mm x 270mm
           1680x1050      60.0*+
           1280x1024      75.0     60.0
           1280x960       60.0
           1152x864       75.0
           1024x768       75.1     72.0     70.1     60.0
           832x624        74.6
           800x600        72.2     75.0     60.3     56.2
           640x480        72.8     75.0     66.7     60.0
           720x400        70.1
        HDMI-1 connected 1280x1024+1680+0 (normal left inverted right x axis y axis) 376mm x 301mm
           1280x1024      60.0*+   75.0
           1152x864       75.0
           1024x768       75.1     60.0
           800x600        75.0     60.3
           640x480        75.0     60.0
           720x400        70.1

    lshw -c video:

        *-display
             description: VGA compatible controller
             product: G96 [GeForce 9500 GS]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
             configuration: driver=nouveau latency=0
             resources: irq:16 memory:fa000000-faffffff memory:d0000000-dfffffff memory:f8000000-f9ffffff ioport:df00(size=128) memory:fb000000-fb07ffff

    Thanks for your help!

    Read the article

  • Small hiccup with VMware Player after upgrading to Ubuntu 12.04

    The upgrade process

    Finally, it was time to upgrade to a new LTS version of Ubuntu: 12.04, aka Precise Pangolin. I scheduled the weekend for this task, and despite the nickname of Mauritius (Cyber Island) it took roughly 6 hours to download nearly 2,400 packages. No problem in general, as I have spare machines to work on, and it was the weekend anyway. All went very smoothly, and only a few packages required manual attention due to local modifications in the configuration. With the new kernel 3.2.0-24 it was necessary to reboot the system, and compared to the last upgrade, I got my graphical login as expected.

    Compilation of VMware Player 4.x fails

    A quick test of the installed applications - Firefox, Thunderbird, Chromium, Skype, CrossOver, etc. - reveals that everything is fine in general. Firing up VMware Player displays the known kernel module dialog that requires compiling the modules for the newly booted kernel. Usually this isn't a big issue, but this time I was confronted with the situation that vmnet didn't compile as expected ("Failed to compile module vmnet"). Luckily, this issue is already well known, even though "Failed to compile module vmmon" is given as the general reason, and it was very easy and quick to find the solution to this problem. In the VMware Communities there are several forum threads related to this topic, and VMware provides the necessary patch file for Workstation 8.0.2 and Player 4.0.2. In case you are still on Workstation 7.x or Player 3.x, there is another patch file available.

    After download, extract the file like so:

        tar -xzvf vmware802fixlinux320.tar.gz

    and run the patch script as super-user:

        sudo ./patch-modules_3.2.0.sh

    This will alter the existing installation and source files of VMware Player on your machine. As a last step, which isn't described in many other resources, you have to restart the vmware service, or, for the faint-hearted, just reboot your system:

        sudo service vmware restart

    This will load the newly created kernel modules into your userspace, and after that VMware Player will start as usual.

    Summary

    Upgrading any derivative of Ubuntu, in my case Xubuntu, is quickly and easily done, but it might hold some surprises from time to time. Nonetheless, it is absolutely worth going for it. Currently, this patch for VMware is the only obstacle I have had to face so far, and my system feels and looks better than before. Happy upgrade!

    Resources

    I used the following links based on Google search results:

        http://communities.vmware.com/message/1902218#1902218
        http://weltall.heliohost.org/wordpress/2012/01/26/vmware-workstation-8-0-2-player-4-0-2-fix-for-linux-kernel-3-2-and-3-3/

    Update on VMware Player 4.0.3

    Please continue reading my follow-up article in case you have upgraded to either VMware Workstation 8.0.3 or VMware Player 4.0.3.

    Read the article

  • How to deal with MySQL Connector/ODBC error "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'"

    - by user12653020
    I am sure many users have run into a mysterious problem where perfectly working ODBC configurations suddenly start failing with errors like:

        Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'

    The above error message might be preceded by something like [nxDc[yQ]. At the same time, odbc.ini specifies in its DSN a different SOCKET=/tmp/mysql.sock or a TCP connection SERVER=<remote_host_or_ip>. The question is: what happened that made the ODBC driver start to ignore the DSN options?

    The clue lies in the corrupted string [nxDc[yQ], which actually was [UnixODBC][MySQL] with every 2nd symbol removed. This is a case of a bad conversion from SQLCHAR to SQLWCHAR. The UnixODBC driver manager took a single-byte character string from the client application and tried to convert it into wide (multi-byte) characters for the Unicode version of the MyODBC driver. Initially the piece of the connection string was represented by 1-byte chars like:

        [S][E][R][V][E][R][=][m][y][h][o][s][t][;]

    After the bad conversion to wide chars (commonly 2-byte UTF-16) it became:

        [SE][RV][ER][=m][yh][os][t;]

    instead of:

        [S\0][E\0][R\0][V\0][E\0][R\0][=\0][m\0][y\0][h\0][o\0][s\0][t\0][;\0]

    Naturally, the MyODBC driver could not parse the bad string and tried to use the default connection type (SOCKET) with the default value (/var/lib/mysql/mysql.sock).

    Now we know what happened, but why did it happen? In most cases it happened because of using the ODBCManageDataSourcesQ4 utility or its older analog ODBCConfig. When registering ODBC drivers, they put in lots of additional options, and one of these options badly affects the UnixODBC driver manager itself. The solution is simple: remove or comment out the option in the odbcinst.ini file (it is empty by default) set for the driver:

        [MySQL ODBC 5.2.6 Driver]
        Description    =
        Driver         = /home/dbs/myodbc526/lib/libmyodbc5w.so
        Driver64       = /home/dbs/myodbc526/lib/libmyodbc5w.so
        Setup          = /home/dbs/myodbc526/lib/libmyodbc5S.so
        Setup64        = /home/dbs/myodbc526/lib/libmyodbc5S.so
        UsageCount     = 1
        CPTimeout      = 0
        CPTimeToLive   = 0
        IconvEncoding  =  # <--------- remove this line
        Trace          =
        TraceFile      =
        TraceLibrary   =

    After applying this simple solution (removing the line with IconvEncoding =), everything returned to normal. Prior to removing that line I tried putting different encoding names there, but the result was not good, so I really don't know how to properly use this option. Unfortunately, the UnixODBC manuals say nothing about it. Therefore, removing the option was the only way to get things done.
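
    To see the fusing effect concretely, here is a tiny stand-alone illustration. C# is used purely for demonstration (the driver manager itself is C): it reinterprets the ASCII bytes of the healthy prefix as UTF-16 code units and then keeps only the low byte of each unit, reproducing the "every 2nd symbol removed" corruption:

        using System;
        using System.Text;

        class NarrowWideDemo
        {
            static void Main()
            {
                // ASCII bytes of the healthy prefix (padded to an even length)
                byte[] ascii = Encoding.ASCII.GetBytes("[UnixODBC][MySQL] ");

                // Misread as UTF-16LE: every pair of ASCII bytes fuses into one character
                string fused = Encoding.Unicode.GetString(ascii);

                // Rendering only the low byte of each fused unit drops every 2nd symbol
                var sb = new StringBuilder();
                foreach (char c in fused)
                    sb.Append((char)(c & 0xFF));

                Console.WriteLine(sb); // "[nxDC[yQ]" - essentially the garbage prefix from the error message
            }
        }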

    Read the article

  • What shall I include in a 10 week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. This module will be available to 1st year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach, but in the 1st semester I will be teaching them web standards technologies - HTML, CSS, JS, jQuery, PHP and MySQL. Where I need advice is how to proportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways I should teach them. Do they need to know about bold as well as strong, etc.?

    UPDATE: Based on your feedback I will only be teaching the latest version of everything - CSS3, HTML5, etc. I'm not sure exactly how long the semester will be, but I'm guessing about 10-12 weeks. Each session is a 2-hour lab. Obviously there's only so much I can cover in that time, and it will be up to the students to go and research this stuff properly on W3Schools etc. My ideas so far were:

    Lesson 0 - Course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we. What is a web server, URL, etc. Looking at different example websites and discussing how they work.
    Lesson 1 - HTML basics (head, body, title, img, table, a, lists, h1, strong, etc.)
    Lesson 2 - CSS for styling and layout - fonts, webfonts, float, etc.
    Lesson 3 - Intro to programming JS (variables, loops, conditionals, functions)
    Lesson 4 - More JS programming fundamentals, DOM manipulation
    Lesson 5 - jQuery - making things fly about and look cool
    Lesson 6 - XML and Ajax
    Lesson 7 - PHP basics - syntax, server-side principles
    Lesson 8 - PHP and MySQL - forms, logins, saving user info
    Lesson 9 - don't know
    Lesson 10 - don't know

    Please let me know if you think this is the right order, what I have missed, how to use any spare sessions, etc. Thanks :)

    UPDATE BASED ON RESPONSES: Thanks for all your responses - some great stuff. To be absolutely clear, this is not a computer science course; it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that do something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2-hour session; however, I do not have the option of reducing the syllabus, so I will just have to explain what the technology does and encourage the students to research it in their own time.

    Read the article

  • How to use DoDirect/PayPal Pro in ASP.NET?

    - by ptahiliani
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using System.Net;
    using System.IO;
    using System.Collections;

    public partial class Default2 : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            // API credentials (3-token)
            string strUsername = "<<enter your sandbox username here>>";
            string strPassword = "<<enter your sandbox password here>>";
            string strSignature = "<<enter your signature here>>";
            string strCredentials = "USER=" + strUsername + "&PWD=" + strPassword + "&SIGNATURE=" + strSignature;
            string strNVPSandboxServer = "https://api-3t.sandbox.paypal.com/nvp";
            string strAPIVersion = "2.3";

            string strNVP = strCredentials + "&METHOD=DoDirectPayment" +
                "&CREDITCARDTYPE=" + "Visa" +
                "&ACCT=" + "4710496235600346" +
                "&EXPDATE=" + "10" + "2017" +
                "&CVV2=" + "123" +
                "&AMT=" + "12.34" +
                "&FIRSTNAME=" + "Demo" +
                "&LASTNAME=" + "User" +
                "&IPADDRESS=192.168.2.236" +
                "&STREET=" + "Lorem-1" +
                "&CITY=" + "Lipsum-1" +
                "&STATE=" + "Lorem" +
                "&COUNTRY=" + "INDIA" +
                "&ZIP=" + "302004" +
                "&COUNTRYCODE=IN" +
                "&PAYMENTACTION=Sale" +
                "&VERSION=" + strAPIVersion;

            try
            {
                // Create web request and web response objects; make sure you are using the correct server (sandbox/live)
                HttpWebRequest wrWebRequest = (HttpWebRequest)WebRequest.Create(strNVPSandboxServer);
                wrWebRequest.Method = "POST";
                StreamWriter requestWriter = new StreamWriter(wrWebRequest.GetRequestStream());
                requestWriter.Write(strNVP);
                requestWriter.Close();

                // Get and read the response
                HttpWebResponse hwrWebResponse = (HttpWebResponse)wrWebRequest.GetResponse();
                StreamReader responseReader = new StreamReader(hwrWebResponse.GetResponseStream());
                string responseData = responseReader.ReadToEnd();
                responseReader.Close();

                // Decode the name/value-pair response into a lookup table
                string result = Server.UrlDecode(responseData);
                string[] arrResult = result.Split('&');
                Hashtable htResponse = new Hashtable();
                string[] responseItemArray;
                foreach (string responseItem in arrResult)
                {
                    responseItemArray = responseItem.Split('=');
                    htResponse.Add(responseItemArray[0], responseItemArray[1]);
                }

                string strAck = htResponse["ACK"].ToString();
                if (strAck == "Success" || strAck == "SuccessWithWarning")
                {
                    string strAmt = htResponse["AMT"].ToString();
                    string strCcy = htResponse["CURRENCYCODE"].ToString();
                    string strTransactionID = htResponse["TRANSACTIONID"].ToString();
                    //ordersDataSource.InsertParameters["TransactionID"].DefaultValue = strTransactionID;
                    string strSuccess = "Thank you, your order for: $" + strAmt + " " + strCcy + " has been processed.";
                    //successLabel.Text = strSuccess;
                    Response.Write(strSuccess);
                }
                else
                {
                    string strErr = "Error: " + htResponse["L_LONGMESSAGE0"].ToString();
                    string strErrcode = "Error code: " + htResponse["L_ERRORCODE0"].ToString();
                    //errLabel.Text = strErr;
                    //errcodeLabel.Text = strErrcode;
                    Response.Write(strErr + "<br/>" + strErrcode);
                    return;
                }
            }
            catch (Exception ex)
            {
                // do something to catch the error, like write to a log file
                Response.Write("error processing");
            }
        }
    }

    Read the article

  • How to deal with Warning: "Uncommittable transaction is detected at the end of the batch. The trans

    - by VishnuTiwariBlog
    Hi, If you are integrating with SQL Server and dealing with batch messages, you may encounter this problem. And this is evitable. The reason is the contention of resources. If your batch contains four messages and all the four messages have to be updated to SQL Server and then at the same time four process will contend for SQL server table and resources and the obvious result will be, few of your transaction will be left uncomitted and if you are not handling dehydration [not modifying the default property of the Dehydration] then your orchestration will dehydrate and will go for retry. If retry is set for every five minutes then after five minutes Port will send the message to the database. Reason for writing this post was as I did not want to see so many DEHYDRATED messages. And this was happening as Host Throttling was not set. Thus as soon as the BizTalk Process finds that SQL resources are unavailable it will go and dehydrate that process and process will go for retry. The contension of resources is unavoidable though we can fine tune the Dehydration setting. If you increase the time that an orchestration can be blocked at a subscription before being dehydrated, possibly you will give more time BizTalk Engine to handle to SQL resource availability. At least I solve the problem by fine tuning the Dehydration properties. Below is the section of config info which you need to add to the BTSNTsvc.exe.config.   <?xml version="1.0" ?> <configuration>        <configSections>               <section name="xlangs" type="Microsoft.XLANGs.BizTalk.CrossProcess.XmlSerializationConfigurationSectionHandler, Microsoft.XLANGs.BizTalk.CrossProcess" />        </configSections>        <runtime>               <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">                      <probing privatePath="BizTalk Assemblies;Developer Tools;Tracking" />               </assemblyBinding>        </runtime>        <xlangs>               <Configuration>                      <Dehydration MaxThreshold="1800" MinThreshold="1" ConstantThreshold="-1">                             <VirtualMemoryThrottlingCriteria OptimalUsage="900" MaximalUsage="1300" IsActive="true" />                             <PrivateMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="true" />                             <PhysicalMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="false" />                      </Dehydration>               </Configuration>        </xlangs> </configuration>

    Read the article

  • Caching items in Orchard

    - by Bertrand Le Roy
    Orchard has its own caching API that, while built on top of ASP.NET's caching feature, adds a couple of interesting twists. In addition to its usual work, the Orchard cache API must transparently separate the cache entries by tenant, but beyond that, it does offer a more modern API. Here's for example how I'm using the API in the new version of my Favicon module:

        _cacheManager.Get(
            "Vandelay.Favicon.Url",
            ctx => {
                ctx.Monitor(_signals.When("Vandelay.Favicon.Changed"));
                var faviconSettings = ...;
                return faviconSettings.FaviconUrl;
            });

    There is no need for any code to test for the existence of the cache entry or to later fill that entry. Seriously, how many times have you written code like this:

        var faviconUrl = (string)cache["Vandelay.Favicon.Url"];
        if (faviconUrl == null) {
            faviconUrl = ...;
            cache.Add("Vandelay.Favicon.Url", faviconUrl, ...);
        }

    Orchard's cache API takes that control flow and internalizes it into the API so that you never have to write it again. Notice how even casting the object from the cache is no longer necessary, as the type can be inferred from the return type of the lambda. The lambda itself is of course only hit when the cache entry is not found. In addition to fetching the object we're looking for, it also sets up the dependencies to monitor. You can monitor anything that implements IVolatileToken. Here, we are monitoring a specific signal ("Vandelay.Favicon.Changed") that can be triggered by other parts of the application like so:

        _signals.Trigger("Vandelay.Favicon.Changed");

    In other words, you don't explicitly expire the cache entry. Instead, something happens that triggers the expiration. Other implementations of IVolatileToken include absolute expiration or monitoring of the files under a virtual path, but you can also come up with your own.

    Read the article

  • What are these errors when I try to "make" the driver of my wireless network?

    - by Tom Brito
    I got a wireless-to-USB adapter, and I'm having some trouble installing the drivers on Ubuntu. First of all, the readme file says to use the "make" command, and I already get errors:

        $ make
        make[1]: Entering directory `/usr/src/linux-headers-2.6.35-22-generic'
          CC [M]  /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.o
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c: In function ‘rtl8192_usb_probe’:
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12325: error: ‘struct net_device’ has no member named ‘open’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12326: error: ‘struct net_device’ has no member named ‘stop’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12327: error: ‘struct net_device’ has no member named ‘tx_timeout’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12328: error: ‘struct net_device’ has no member named ‘do_ioctl’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12329: error: ‘struct net_device’ has no member named ‘set_multicast_list’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12330: error: ‘struct net_device’ has no member named ‘set_mac_address’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12331: error: ‘struct net_device’ has no member named ‘get_stats’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12332: error: ‘struct net_device’ has no member named ‘hard_start_xmit’
        make[2]: *** [/home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.o] Error 1
        make[1]: *** [_module_/home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.35-22-generic'
        make: *** [all] Error 2

    /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/ is the path where I copied the drivers on my computer. Any idea how to solve this? (I don't even know what the error is...)

    Update: sudo lshw -class network

        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:01:00.0
             logical name: eth0
             version: 03
             serial: 78:e3:b5:e7:5f:6e
             size: 10MB/s
             capacity: 1GB/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s
             resources: irq:42 ioport:d800(size=256) memory:fbeff000-fbefffff memory:faffc000-faffffff memory:fbec0000-fbedffff
        *-network DISABLED
             description: Wireless interface
             physical id: 2
             logical name: wlan0
             serial: 00:26:18:a1:ae:64
             capabilities: ethernet physical wireless
             configuration: broadcast=yes multicast=yes wireless=802.11b/g

    Read the article

  • what's wrong with my sitemap?

    - by cadrian
    Google Webmaster Tools says that my sitemap is in a wrong format. I don't see my mistake; I think I followed every guideline provided by Google. Can someone help me?

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.vocalcontraste.fr/</loc>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/presentation/</loc>
            <lastmod>2012-01-30T21:49:06+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/les-concerts/</loc>
            <lastmod>2012-12-13T21:55:00+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/ecoutez-contraste/</loc>
            <lastmod>2012-07-13T18:19:45+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/repertoire/</loc>
            <lastmod>2012-07-13T17:30:14+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/la-presse/</loc>
            <lastmod>2012-07-11T08:17:48+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/recrutement/</loc>
            <lastmod>2012-02-01T22:22:03+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/nos-partenaires/</loc>
            <lastmod>2012-07-11T07:43:31+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/contactez-nous/</loc>
            <lastmod>2012-02-02T19:01:32+01:00</lastmod>
          </url>
        </urlset>
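
    One way to pin down what the validator is complaining about is to check the file against the official sitemap XML schema. A quick check along these lines (assuming xmllint from libxml2 is installed; sitemap.xml stands in for your actual file name):

        xmllint --noout --schema http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd sitemap.xml

    If the document validates cleanly, the problem may be in how the file is served rather than in the XML itself, for example the encoding, a byte-order mark before the XML declaration, or the Content-Type header.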

    Read the article

  • Profiling Startup Of VS2012 - Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deeply I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 2.4GHz with 3 GB RAM and a 32-bit Windows 7. ANTS Profiler is really easy to use, so let's try it out.

    ANTS Profiler wants to start the profiled application on its own, which seems to be rather common. I chose method-level timing of all managed methods. In the configuration menu I asked for all call stacks to get full details. Once this is configured, you are ready to go.

    After that you can select the Method Grid to view wall clock time in ms. I hate percentages, which are on by default, because I want to look at where absolute time is spent and not something else.

    From the Method Grid I can drill down to see where time is spent, and I can look at the decompiled methods where the time is spent. This really does look nice. But did you see the size of the scroll bar in the Method Grid? Although I wanted all call stacks, I get only about 4 pages of methods to drill down into. From the scroll bar count I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense; I will never find a bottleneck in VS when I am presented with only a fraction of the methods that were actually executed. I also tried, in the configuration window, to profile the extremely trivial functions as well, but there was no noticeable difference. It seems that ANTS Profiler filters away far too many details to be useful for bigger systems.

    If you want to optimize a CPU-bound operation inside NUnit, then ANTS Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process.

    Next: JetBrains dotTrace

    Read the article

  • Problems uploading package to launchpad

    - by user74513
    I'm having a lot of problems uploading my Showdown project to a PPA. I've correctly set up my PGP key and my public SSH key on Launchpad. I've packaged my C++ project with debuild, producing a source package. lintian gave me only these two warnings, which I think are OK for the Showdown rules:

      W: massren source: native-package-with-dash-version
      W: massren source: binary-nmu-debian-revision-in-source 1.0-0extras12.04.1~ppa2

    Producing a binary package works too, and the package installs without problems on my Ubuntu 12.04 machine; I only get a few more lintian warnings about the fact that I'm installing into /opt/extras.ubuntu.com/. I'm uploading with:

      dput ppa:gabrielegreco/massren massren_1.0-0extras12.04.1~ppa2_source.changes

    When I upload with dput I get no errors, the signatures seem OK, and the public key seems to be accepted too (since the upload goes through without asking for passwords):

      Checking signature on .changes
      gpg: Signature made Mon 02 Jul 2012 10:00:38 AM CEST using RSA key ID 49982576
      gpg: Good signature from "Gabriele Greco "
      Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2_source.changes.
      Checking signature on .dsc
      gpg: Signature made Mon 02 Jul 2012 10:00:33 AM CEST using RSA key ID 49982576
      gpg: Good signature from "Gabriele Greco "
      Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2.dsc.
      Uploading to ppa (via ftp to ppa.launchpad.net):
      Uploading massren_1.0-0extras12.04.1~ppa2.dsc: done.
      Uploading massren_1.0-0extras12.04.1~ppa2.tar.gz: done.
      Uploading massren_1.0-0extras12.04.1~ppa2_source.changes: done.
      Successfully uploaded packages.

    At the moment I'm not receiving any response from the Launchpad site, and the upload does not show up on the PPA page. Previous attempts gave me response e-mails with different kinds of errors:

      File massren_1.0-0extras12.04.1~ppa1.tar.gz mentioned in the changes has a checksum mismatch. 1503fa155226cbc4aba2f8ba9aa11a75 != 294a5e0caf3fe95b0b007a10766e9672

    Or, more cryptically:

      GPG verification of /srv/launchpad.net/ppa-queue/incoming/upload-ftp-20120629-163320-001135/~gabrielegreco/massren/ubuntu/massren_1.0-0extras12.04.1~ppa1.dsc failed: Verification failed 3 times: ["(7, 58, u'No data')", "(7, 58, u'No data')", "(7, 58, u'No data')"]
      Further error processing not possible because of a critical previous error.

    Any idea how I can solve this problem? I'm new to Ubuntu packaging, so I may be missing some step... Is there an alternative to dput (i.e. a manual upload)?

    Read the article

  • Designing a completely new database/GUI solution for my company

    - by user1277304
    I'm no expert when it comes to everything Visual Studio 2010 and utilizing SQL Server 2008. I'm sure some of the personal projects I've built for my own use would get laughed off the face of the planet, but SQL CE has been the solution I was looking for in those home-type projects. And they work, flawlessly. Now I feel it's time to step up to the big leagues. I want to develop a complete, unified, module-based solution for the company I'm working for. We're still using stuff from the '80s, for goodness' sake! I use Excel and query the ancient database on my own because I can't stand the GUI. Nothing against people of age, but the IDE our programmers are using is from the Stone Age, and they use APL of all things with it. I've yet to see a radio button control anywhere in the GUI where it would make sense.

    Anyway, I want to do this right from the ground up. I'm by no means a newbie when it comes to programming in .NET 2010; however, I want the entire solution to be professionally done. I want version control, test projects, project flow, SQL Server 2008 integration, and all the bells and whistles that come with that. I know for a fact that if we had something like that running, not only would development costs and time be slashed fourfold, but the possibilities for expansion and performance would skyrocket. (Between the GUI and our DB engine, it can only use ONE CORE! ONE! It's 2012, for goodness' sake!) Our business is growing and our current ancient solution just can't keep up, and I'd hate to see our business go down in flames because our programmer is stuck in the '80s and refuses to use anything current.

    So I ask you guys, the experts and know-it-alls: where do I start? Are there any gems of good books out there in the haystack of all the "... for Dummies"-type deals? I already have several people backing me in this endeavour, and while it may seem brash to just usurp the current programmers, I'm doing this for the company as a whole. Thank you guys for your time.

    Read the article

  • Lenovo W520 back USB port not working

    - by jaudette
    The USB port on the back of my laptop (on the right side when viewed from the user's perspective) is not working. Does anybody know if this port can be made to work, and what its port number is? Here is my lsusb, if it helps:

      % lsusb
      Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 003 Device 002: ID 046d:c01e Logitech, Inc. MX518 Optical Mouse
      Bus 003 Device 003: ID 046d:c318 Logitech, Inc. Illuminated Keyboard
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
      Bus 001 Device 003: ID 0765:5001 X-Rite, Inc. Huey PRO Colorimeter
      Bus 001 Device 004: ID 147e:2016 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor
      Bus 001 Device 005: ID 0a5c:217f Broadcom Corp. Bluetooth Controller
      Bus 001 Device 006: ID 04f2:b217 Chicony Electronics Co., Ltd Lenovo Integrated Camera (0.3MP)
      Bus 002 Device 003: ID 17ef:1003 Lenovo Integrated Smart Card Reader

    I am running 12.10, upgraded from 12.04, but it did not work in 12.04 either. The two USB ports on the left work just fine.

    EDIT: I just updated my BIOS from 1.32 to 1.39; no change in behaviour. The port does not even power up my devices.

    EDIT 2: I booted into Windows, and the port works there. I went into the Device Manager and looked at the USB settings. I found my USB drive on Port 2, Hub 3; I just don't know how that relates to the Bus and Device numbers under Linux. In Windows, the smart card reader was on the USB hub located at Port 1, Hub 2; the fingerprint reader and Bluetooth were on the USB hub located at Port 1, Hub 1.

    EDIT 3: I went and looked at this SO post. I tried watching my kern.log file with tail -f /var/log/kern.log. I got some activity when plugging/removing devices in the other ports, but nothing happens when connecting a device to that port. It really looks disabled. I looked at /sys/bus/usb/devices/usb1/power/control through usb4 and they are all set to auto. As expected, usb4 reports version 3.00; the others (usb1-usb3) are 2.00.
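
    This does not explain why the port is dead, but it addresses the "how does Port X, Hub Y relate to Bus/Device numbers" part: on Linux the physical topology is encoded in the entry names under /sys/bus/usb/devices (for example, 3-1.2 means bus 3, port 1 on the root hub, then port 2 on a downstream hub; usbN is the root hub of bus N). A small sketch, assuming the usual sysfs layout, that prints each device's Bus/Device pair next to its port path:

      /* usbports.c - map lsusb's Bus/Device numbers to physical port paths.
       * Build with: cc -o usbports usbports.c */
      #include <stdio.h>
      #include <string.h>
      #include <dirent.h>

      static int read_int(const char *dev, const char *file, int *out)
      {
          char path[512];
          FILE *f;
          int ok;

          snprintf(path, sizeof(path), "/sys/bus/usb/devices/%s/%s", dev, file);
          f = fopen(path, "r");
          if (!f)
              return -1;
          ok = (fscanf(f, "%d", out) == 1);
          fclose(f);
          return ok ? 0 : -1;
      }

      int main(void)
      {
          DIR *d = opendir("/sys/bus/usb/devices");
          struct dirent *e;

          if (!d) {
              perror("/sys/bus/usb/devices");
              return 1;
          }
          while ((e = readdir(d)) != NULL) {
              int bus, dev;

              /* Skip "." / ".." and interface nodes (their names contain ':'). */
              if (e->d_name[0] == '.' || strchr(e->d_name, ':'))
                  continue;
              if (read_int(e->d_name, "busnum", &bus) == 0 &&
                  read_int(e->d_name, "devnum", &dev) == 0)
                  printf("Bus %03d Device %03d  <-  port path %s\n",
                         bus, dev, e->d_name);
          }
          closedir(d);
          return 0;
      }

    Run it once, plug a device into the suspect port, and run it again: if the port is electrically alive, a new entry with one more component in its port path should appear. If nothing new appears at all, matching the silent kern.log, the port is most likely disabled below the OS, e.g. in the BIOS or by a hardware fault.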

    Read the article

  • Is it possible to fulfill the social needs of a human being through a 3D social game like IMVU?

    - by Totty
    (I'm not advertising or promoting this game; it's just an example from my experience, and I would like to have your opinion on the matter if possible.) I've started researching "things" about games, and I've decided to begin playing IMVU, as a friend of mine said it's cool. At first it seemed like just another 3D social game, not so cool... But I "tried to like it", and after one day I can say I'm addicted to it! Yes; I will explain better.

    About the game: You can go into chat rooms and move to positions. Some positions are like sitting on a sofa or the floor, dancing alone or with a partner, kissing, and more in this vein. In the free version of the game there is no nudity. You can even listen to music, watch YouTube... The 3D graphics are quite low-end, so it's not as realistic as today's paid PC games.

    About my experience: At first I was going into chat rooms with my friend; they seemed very nice. There were people talking about general stuff, quite like in real life. Well, I began to get to know some girls (yes, virtual girls controlled by real girls, I hope!). Things happened:

    - Some girls are just crazy, not like in real life; they make out before even talking.
    - With other girls you can speak a little bit, then they add you to their friend list. Sometimes they invite you to their virtual places.
    - Some girls have IMVU boyfriends only (but not in reality), and most of them don't even make out in the game, so there is a real level of commitment involved here! From what my friend told me, though, they last about 3 days for him, at least...
    - Some others have real and IMVU boyfriends who are the same person. So far I haven't found a girl whose boyfriend differs between IMVU and reality, nor one with multiple boyfriends.
    - There are rooms where the same people find each other every day and speak about general stuff, relationships and so on... They are nice to you, they "feel" you and show care. This is what amazes me: they treat you like a real human being and as their friend in the real world (of course it's not always like this).
    - There are jealous girls too, and competitiveness between females, lol. I know you loled!

    This is kind of social. So today I closed the door of my room, played it all day long, and guess what: I didn't feel the need to be with a real person at all. Normally, if I stayed alone for a full day I would go quite crazy... So the question is: is it just me who seems able to fulfill my social needs this way, or is there something more? Thanks for your precious time reading my full question.

    Read the article

< Previous Page | 968 969 970 971 972 973 974 975 976 977 978 979  | Next Page >