Search Results

Search found 47301 results on 1893 pages for 'system matrix'.


  • I cannot install anything in Ubuntu that has a dependency problem

    - by phpGeek
    I wanted to install TeamViewer on a 64-bit Linux system. What I did was download the teamviewer.deb file and install it as below:

        sudo dpkg -i install teamviewer.deb

    Then I wanted to correct the dependency problem, so I issued the following command:

        sudo apt-get install libc6:i386 libgcc1:i386 libasound2:i386 libfreetype6:i386 zlib1g:i386 libsm6:i386 libxdamage1:i386 libxext6:i386 libxfixes3:i386 libxrender1:i386 libxtst6:i386

    I got the following error:

        E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
        E: Unable to correct dependencies

    I then tried:

        sudo apt-get install -f

    Again I got the same error. I even tried to install gdebi, but I got the above error again. I emptied the archives folder:

        sudo apt-get clean
        sudo apt-get update
        sudo apt-get upgrade

    Again I have a problem installing my deb package. Is there anything I can do now to solve this problem? I've read the article below as well: Install Teamviewer using a 64-bits system but I get a dependency error

    EDIT: I found libperl5.14:amd64 to be a broken package. I used:

        sudo apt-get remove libperl5.14:amd64

    I got the following message:

        E: Unable to locate package Broken

    Read the article

  • CodePlex Daily Summary for Friday, June 14, 2013

    CodePlex Daily Summary for Friday, June 14, 2013

    Popular Releases

    - BlackJumboDog: Ver5.9.1 (2013.06.13). The release notes are in Japanese and were garbled in extraction; the legible fragments mention SSI #include handling and CGI support in the Web server.
    - BrightstarDB: BrightstarDB 1.3.40613. This is the first "official" BrightstarDB release under the MIT open source license. The code base has been reworked to replace / remove the use of third-party closed-source tools and has been updated to use a patched version of dotNetRDF 1.0 that includes the most recent updates for SPARQL 1.1 and Turtle. We have also extended the core RDF API to support targeting a specific graph with an update or query operation and made a change to the core profiling code to disable it by default, leadin...
    - Lakana - WPF Framework: Lakana V2.1 RTM. Dynamic text localization; a new application-wide message bus.
    - Pokemon Battle Online: ETV. The release notes are in Chinese and were garbled in extraction; the legible fragments mention server changes, PP-related battle fixes, and bug fixes.
    - Web Pages CMS: 0.5.0.5. Added empty media directory.
    - Modern UI for WPF: Modern UI 1.0.4. The ModernUI assembly, including a demo app demonstrating the various features of Modern UI for WPF. Related downloads: NuGet (ModernUI for WPF is also available as a NuGet package in the NuGet gallery, id: ModernUI.WPF) and Modern UI for WPF Templates (a Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF; the extension includes the ModernUI.WPF NuGet package).
    - Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.11). XrmToolbox improvements: exception handling when loading plugins; updated information panel for displaying two lines of text. Tool improvements: Metadata Document Generator (v1.2013.6.10); new tool Web Resources Manager (v1.2013.6.11), which can retrieve the list of unused web resources and retrieve web resources from a solution. All tools: Access Checker (v1.2013.2.5), Attribute Bulk Updater (v1.2013.1.17), FetchXml Tester (v1.2013.3.4), Iconator (v1.2013.1.17), Metadata Document Generator (v1.2013.6.10), Privilege...
    - Document.Editor: 2013.23. New Insert Emoticon support; improved Format support; minor bug fixes, improvements and speed-ups.
    - Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.4 for VS2012 (release date 6/10/2013). Items addressed in this 2.4 release: updated MSBuild Community Tasks reference to 1.4.0.61; setting up your DotNetNuke module development environment; installing Christoc's DotNetNuke Module Development Templates; customizing the latest DotNetNuke module development project templates.
    - Layered Architecture Sample for .NET: Leave Sample - June 2013 (for .NET 4.5). Thank you for downloading Layered Architecture Sample. Please read the accompanying README.txt file for setup and installation instructions. This is the first of a series of revised samples that will be released to illustrate the layered architecture design pattern. This version is only supported on Visual Studio 2012. This sample illustrates the use of ASP.NET Web Forms, ASP.NET Model Binding, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and Microsoft Ente...
    - Papercut: Papercut 2013-6-10. Feature: shows From, To, Date and Subject of email. Feature: async UI and loading spinner. Enhancement: improved speed when loading large attachments. Enhancement: decoupled SMTP server into a secondary assembly. Enhancement: upgraded to .NET v4. Fix: messages lost when received very fast. Fix: email encoding issues on display / automatically detect message encoding. Installation note: installation is copy and paste; incoming messages are written to the start-up directory of Papercut. If you do n...
    - Supporting Guidance and Whitepapers: v1.BETA Unit Test Generator Documentation. Welcome to the Unit Test Generator. Once you've moved to Visual Studio 2012, what's a dev to do without the Create Unit Tests feature? Based on the high demand on User Voice for this feature to be restored, the Visual Studio ALM Rangers have introduced the Unit Test Generator Visual Studio extension. The extension adds the "create unit test" feature back, with a focus on automating project creation, adding references and generating stubs, extensibility, and targeting of multiple test framewor...
    - MapWindow 4: MapWindow GIS v4.8.8 - Release Candidate - 32Bit. Download the release notes here: http://svn.mapwindow.org/svnroot/MapWindow4Dev/Bin/MapWindowNotes.rtf
    - LINQ to Twitter: LINQ to Twitter v2.1.06. Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
    - VR Player: VR Player 0.3.1 ALPHA. New plugin system with individual folders; TrackIR support; Maya and 3ds Max format support; dual screen support; mono layouts (left and right); cylinder height parameter; barrel effect factor parameter; Razer Hydra filter parameter; VRPN bug fixes; UI improvements; performance improvements; stabilization and logging with log4net; new default values based on user feedback; CTRL key to open the menu.
    - SimCityPak: SimCityPak 0.1.0.8. New features: import BMP color palettes for vehicles; import RASTER files (uncompressed 8.8.8.8 DDS files); view different channels of RASTER files or a preview of all layers combined; find text in javascripts; TGA viewer; ground textures added to the lot editor; many additional identified instances and properties.
    - Wsus Package Publisher: Release v1.2.1306.09. Adds more verifications on certificate validation: WPP will not let the user try to publish an update until the certificate is valid. Adds the certificate expiration date on the 'About' form. Filters approval so a user cannot approve an update for uninstallation when the update does not support uninstallation. Adds the server and console version on the 'About' form: WPP will not let the user publish an update until the server and console are at the same level. WPP does not let the user ...
    - AJAX Control Toolkit: June 2013 Release (version 7.0607). June 2013 release of the AJAX Control Toolkit: AJAX Control Toolkit .NET 4.5, for .NET 4.5 and sample site (recommended); AJAX Control Toolkit .NET 4, for .NET 4 and sample site (recommended); AJAX Control Toolkit .NET 3.5, for .NET 3.5 and sample site (recommended). Notes: instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...
    - Rawr: Rawr 5.2.1. This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (not updated yet for MoP): we now have an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    - VG-Ripper & PG-Ripper: PG-Ripper 1.4.13. Changes: NEW: added support for "ImageJumbo.com" links; FIXED: ripping of threads with multiple pages.

    New Projects

    - Control de stock: a stock control system (description translated from Spanish).
    - Emails Outlook Mac Recovery Software That Is Provenly Better Than Others: recover OLM emails with Outlook Mac recovery software that restores Mac OLM files and converts OLM files to EML and DBX formats.
    - Horarios e Disciplinas: the CEPE timetable project, developed by an experienced analyst-programmer and two interns eager to learn C#, Razor, Entity Framework and AJAX (description translated from Portuguese).
    - Hyper-v VHost and VM Inventory Reports: get a detailed inventory/report of your VHost and its VMs; a PowerShell script that dumps the report as a text file.
    - jean0614changbranch: dd
    - jean0614jabbrchangebranch: d
    - Landscape: a planned maintenance system, for any fleet of assets where regular maintenance is required.
    - MongoCamp: MongoCamp for MongoDB.
    - No-nonsense Flickr Uploader: simple, easy-to-use, stable Flickr uploader for large batch uploads.
    - Outlook PST Converter to Export, Convert & Save PST in Other Formats: exports Outlook emails from PST format to other formats, such as PDF and MSG.
    - Penrose Tiles: a simple Windows Forms based Penrose tile generator.
    - Ranch Master - an Object Oriented Programming 2013 College Project: raise sheep by keeping their hunger and thirst levels low, keep their condition 'Healthy', collect the wool produced, and trade the wool for points.
    - SH Demo App: Summary!
    - Stock Right: initial release.
    - Testing Project: testing
    - testjabbr0613: test
    - Units of Measure: a simple solution for coping with units of measure in C# applications. You can develop your own unit system, be it SI-based or anything else that suits your needs.
    - User Friendly Registration Plugin for DotNetNuke: this plugin makes the DotNetNuke registration process more user friendly. The main idea is to inform the user about possible mistakes in the registration form immediately,

    Read the article

  • How do I classify using an SVM classifier in MATLAB?

    - by Gomathi
    I'm on a project of liver tumor segmentation and classification. I used region growing and FCM for liver and tumor segmentation respectively. Then I used the gray-level co-occurrence matrix (GLCM) for texture feature extraction. I have to use a support vector machine for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program it in MATLAB?

    To the GLCM program, I gave the tumor-segmented image as input. Was I correct? If so, I think my output will also be correct. I gave the parameters exactly as in the example provided in the documentation itself. The output I obtained was:

        stats =
            autoc: [1.857855266614132e+000 1.857955341199538e+000]
            contr: [5.103143332457753e-002 5.030548650257343e-002]
            corrm: [9.512661919561399e-001 9.519459060378332e-001]
            corrp: [9.512661919561385e-001 9.519459060378338e-001]
            cprom: [7.885631654779597e+001 7.905268525471267e+001]
            cshad: [1.219440700252286e+001 1.220659371449108e+001]
            dissi: [2.037387269065756e-002 1.935418927908687e-002]
            energ: [8.987753042491253e-001 8.988459843719526e-001]
            entro: [2.759187341212805e-001 2.743152140681436e-001]
            homom: [9.930016927881388e-001 9.935307908219834e-001]
            homop: [9.925660617240367e-001 9.930960070222014e-001]
            maxpr: [9.474275457490587e-001 9.474466930429607e-001]
            sosvh: [1.847174384255155e+000 1.846913030238459e+000]
            savgh: [2.332207337361002e+000 2.332108469591401e+000]
            svarh: [6.311174784234007e+000 6.314794324825067e+000]
            senth: [2.663144677055123e-001 2.653725436772341e-001]
            dvarh: [5.103143332457753e-002 5.030548650257344e-002]
            denth: [7.573115918713391e-002 7.073380266499811e-002]
            inf1h: [-8.199645492654247e-001 -8.265514568489666e-001]
            inf2h: [5.643539051044213e-001 5.661543271625117e-001]
            indnc: [9.980238521073823e-001 9.981394883569174e-001]
            idmnc: [9.993275086521848e-001 9.993404634013308e-001]

    The thing is, I ran the program for three images, but all three gave me the same output. When I used graycoprops():

        stat =
            Contrast: 4.721877658740964e+005
            Correlation: -3.282870417955449e-003
            Energy: 8.647689474127760e-006
            Homogeneity: 8.194621855726478e-003

        stat =
            Contrast: 2.817160447307697e+004
            Correlation: 2.113032196952781e-005
            Energy: 4.124904827799189e-004
            Homogeneity: 2.513567163994905e-002

        stat =
            Contrast: 7.086638436309059e+004
            Correlation: 2.459637878221028e-002
            Energy: 4.640677159445994e-004
            Homogeneity: 1.158305728309460e-002

    The images are:
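    On the normalization question, the usual recipe is to scale each feature dimension independently, with the scaling parameters (per-feature minimum and maximum, or mean and standard deviation) computed on the training set only and then reused unchanged on the test vectors. Below is a minimal min-max sketch; it is written in C++ purely for concreteness, and the same two steps are a few lines of min/max or bsxfun arithmetic in MATLAB:

        // Min-max normalization of feature vectors (one row per sample,
        // one column per GLCM feature). The lo/hi parameters must be fitted
        // on training data and reused unchanged on test data.
        #include <algorithm>
        #include <vector>

        void FitMinMax(const std::vector<std::vector<double>>& X,
                       std::vector<double>& lo, std::vector<double>& hi) {
            size_t d = X.front().size();
            lo.assign(d, 1e300);
            hi.assign(d, -1e300);
            for (const auto& row : X)
                for (size_t j = 0; j < d; ++j) {
                    lo[j] = std::min(lo[j], row[j]);
                    hi[j] = std::max(hi[j], row[j]);
                }
        }

        void ApplyMinMax(std::vector<std::vector<double>>& X,
                         const std::vector<double>& lo,
                         const std::vector<double>& hi) {
            for (auto& row : X)
                for (size_t j = 0; j < row.size(); ++j)
                    // guard against constant features (hi == lo)
                    row[j] = (hi[j] > lo[j]) ? (row[j] - lo[j]) / (hi[j] - lo[j]) : 0.0;
        }

    Scaling this way keeps large-valued features like cprom (around 79 above) from drowning out small-valued ones like contr (around 0.05) in the SVM's kernel distance.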

    Read the article

  • Asus N46VZ produces more heat on Ubuntu than on Windows

    - by Blaze Tama
    First, I've only just started using Ubuntu as my main operating system, so please bear with me. My Asus N46VZ temps are pretty high when I'm on Ubuntu (13.10): around 60°C even when I've just opened a browser. This doesn't happen on Windows, where the temps are around 50°C. I did some research and some people said the problem might be my graphics card. I have an Intel HD 4000 and a GT 650M. I checked my system settings and it says I'm using "Intel® Ivybridge Mobile", so I looked at the "Additional Drivers" tab in Software & Updates, but I found nothing there. The point is, I still don't know what makes my laptop overheat. The VGA part is just my guess (and I still don't understand the correlation between my VGA and the heat; everything except the temp seems fine). Any help is appreciated. Thanks :D

    Read the article

  • Building Queries Systematically

    - by Jeremy Smyth
    The SQL language is a bit like a toolkit for data. It consists of lots of little fiddly bits of syntax that, taken together, allow you to build complex edifices and return powerful results. For the uninitiated, the many tools can be quite confusing, and it's sometimes difficult to decide how to go about building non-trivial queries, that is, queries that are more than a simple SELECT a, b FROM c;

    A System for Building Queries

    When you're building queries, you could use a system like the following:

    1. Decide which fields contain the values you want to use in your output, and how you wish to alias those fields:
       - Values you want to see in your output.
       - Values you want to use in calculations. For example, to calculate margin on a product, you could calculate price - cost and give it the alias margin.
       - Values you want to filter with. For example, you might only want to see products that weigh more than 2kg or that are blue. The weight or colour columns could contain that information.
       - Values you want to order by. For example, you might want the most expensive products first and the least expensive last. You could use the price column in descending order to achieve that.
    2. Assuming the fields you picked in point 1 are in multiple tables, find the connections between those tables:
       - Look for relationships between tables and identify the columns that implement those relationships. For example, the Orders table could have a CustomerID field referencing the same column in the Customers table.
       - Sometimes the problem doesn't use relationships but rests on a different field; sometimes the query is looking for a coincidence of fact rather than a foreign key constraint. For example, you might have sales representatives who live in the same state as a customer; this information is normally not used in relationships, but if your query is for organizing events where sales representatives meet customers, it's useful in that query. In such a case you would record the names of the columns at either end of such a connection.
       - Sometimes relationships require a bridge, a junction table that wasn't identified in point 1 but is needed to connect the tables you need; these are used in many-to-many relationships. In these cases you need to record the columns in each table that connect to similar columns in other tables.
    3. Construct a join or series of joins using the fields and tables identified in point 2. This becomes your FROM clause.
    4. Filter using some of the fields in point 1. This becomes your WHERE clause.
    5. Construct an ORDER BY clause using values from point 1 that are relevant to the desired order of the output rows.
    6. Project the result using the remainder of the fields in point 1. This becomes your SELECT clause.

    A Worked Example

    Let's say you want to query the world database to find a list of countries (with their capitals) and the change in GNP, using the difference between the GNP and GNPOld columns, and that you only want to see results for countries with a population greater than 100,000,000. Using the system described above, you could do the following:

    1. The Country.Name and City.Name columns contain the name of the country and city respectively. The change in GNP comes from the calculation GNP - GNPOld; both columns are in the Country table. This calculation is also used to order the output, in descending order. To see only countries with a population greater than 100,000,000, you need the Population field of the Country table. There is also a Population field in the City table, so you'll need to specify the table name to disambiguate. You can also write a number like 100 million as 100e6 instead of 100000000 to make it easier to read.
    2. Because the fields come from the Country and City tables, you'll need to join them. There are two relationships between these tables: each city is hosted within a country, and the city's CountryCode column identifies that country; also, each country has a capital city, whose ID is contained within the country's Capital column. This latter relationship is the one to use, so the relevant columns and the condition that uses them are represented by the following FROM clause:

           FROM Country JOIN City ON Country.Capital = City.ID

    3. The statement should only return countries with a population greater than 100,000,000. Country.Population is the relevant column, so the WHERE clause becomes:

           WHERE Country.Population > 100e6

    4. To sort the result set in reverse order of difference in GNP, you could use either the calculation or its position in the output (it's the third column):

           ORDER BY GNP - GNPOld

       or

           ORDER BY 3

    5. Finally, project the columns you wish to see by constructing the SELECT clause:

           SELECT Country.Name AS Country, City.Name AS Capital,
                  GNP - GNPOld AS `Difference in GNP`

    The whole statement ends up looking like this:

        mysql> SELECT Country.Name AS Country, City.Name AS Capital,
            ->        GNP - GNPOld AS `Difference in GNP`
            -> FROM Country JOIN City ON Country.Capital = City.ID
            -> WHERE Country.Population > 100e6
            -> ORDER BY 3 DESC;
        +--------------------+------------+-------------------+
        | Country            | Capital    | Difference in GNP |
        +--------------------+------------+-------------------+
        | United States      | Washington |         399800.00 |
        | China              | Peking     |          64549.00 |
        | India              | New Delhi  |          16542.00 |
        | Nigeria            | Abuja      |           7084.00 |
        | Pakistan           | Islamabad  |           2740.00 |
        | Bangladesh         | Dhaka      |            886.00 |
        | Brazil             | Brasília   |         -27369.00 |
        | Indonesia          | Jakarta    |        -130020.00 |
        | Russian Federation | Moscow     |        -166381.00 |
        | Japan              | Tokyo      |        -405596.00 |
        +--------------------+------------+-------------------+
        10 rows in set (0.00 sec)

    Queries with Aggregates and GROUP BY

    While this system might work well for many queries, it doesn't cater for situations where you have complex summaries and aggregation. For aggregation, you'd start with choosing which columns to view in the output, but this time you'd construct them as aggregate expressions. For example, you could look at the average population, or the count of distinct regions. You could also perform more complex aggregations, such as the average GNP per head of population, calculated as AVG(GNP/Population). Having chosen the values to appear in the output, you must choose how to aggregate those values. A useful way to think about this is that every aggregate query is of the form X, Y per Z. The SELECT clause contains the expressions for X and Y, as already described, and Z becomes your GROUP BY clause. Ordinarily you would also include Z in the query so you can see how you are grouping, so the output becomes Z, X, Y per Z.

    As an example, consider the following, which shows the count of countries and the average population per continent:

        mysql> SELECT Continent, COUNT(Name), AVG(Population)
            -> FROM Country
            -> GROUP BY Continent;
        +---------------+-------------+-----------------+
        | Continent     | COUNT(Name) | AVG(Population) |
        +---------------+-------------+-----------------+
        | Asia          |          51 |   72647562.7451 |
        | Europe        |          46 |   15871186.9565 |
        | North America |          37 |   13053864.8649 |
        | Africa        |          58 |   13525431.0345 |
        | Oceania       |          28 |    1085755.3571 |
        | Antarctica    |           5 |          0.0000 |
        | South America |          14 |   24698571.4286 |
        +---------------+-------------+-----------------+
        7 rows in set (0.00 sec)

    In this case, X is the number of countries, Y is the average population, and Z is the continent. Of course, you could have more fields in the SELECT clause, and more fields in the GROUP BY clause, as you require. You would also normally alias columns to make the output more suited to your requirements.

    More Complex Queries

    Queries can get considerably more interesting than this. You could add joins and other expressions to your aggregate query, as in the earlier part of this post. You could have more complex conditions in the WHERE clause. Similarly, you could use queries such as these in subqueries of yet more complex super-queries. Each technique becomes another tool in your toolbox, until before you know it you're writing queries across 15 tables that take two pages to write out. But that's for another day...

    Read the article

  • Radeon HD 6850 VMware 3D Support?

    - by Matt
    I'm a new Ubuntu user (new to all of Linux, actually). I've installed Ubuntu 11.10 x64 and have been enjoying it, but I wanted to see how it would perform using VMware for casual gaming, since I find dual booting too much of a nuisance to even bother using Ubuntu at all (sorry!). I have an Asus EAH6850 DirectCU Radeon HD 6850 graphics card and I've installed the additional ATI/AMD proprietary FGLRX graphics driver, but when I open a Windows XP 32-bit machine I installed through VMware, I get this message: "The GPU driver currently installed on this host may cause issues with VMware products. If you notice any issues please disable the 3D support in the affected virtual machines." I still have 3D capabilities in the VM, but they are very choppy, even when running the DX tests (the spinning cube). I've seen people on YouTube and other forums saying that, with the new 3D acceleration in VMware 8, gaming is very possible through VMs (and I've seen them running the DX tests with the spinning cube very smoothly). I'm wondering if my graphics card isn't fully supported or if I have installed it wrong. Also, when I check System Info (on the host Ubuntu machine) it says "Graphics VESA:BARTS". Should my Radeon HD 6850 be showing up there? The rest of my basic system info: i5 2500K, 8GB 1600MHz memory. The guest is running with access to all 4 cores of the processor and 3GB of memory assigned.

    Read the article

  • Direct2D Transform

    - by James
    I have a beginner question about Direct2D transforms. I have a 20 x 10 bitmap that I would like to draw in different orientations. To start, I would like to draw it vertically with a destination rectangle of say:

        (left, top, right, bottom) = (300, 300, 310, 320)

    The bitmap is wider than it is tall (20 x 10), but when I draw it vertically, it will appear taller than it is wide (10 x 20). I know that I can use a rotation matrix like so:

        m_pRenderTarget->SetTransform(
            D2D1::Matrix3x2F::Rotation(
                90.0f,
                D2D1::Point2F(<center of shape>))
        );

    But when I use this method to rotate my shape, the destination rectangle is still wider than it is tall. Maybe it would look something like this:

        (left, top, right, bottom) = (280, 290, 300, 300)

    The destination rectangle is 20 x 10 but the bitmap appears on the screen as 10 x 20, so I can't look at the destination rectangle in the debugger and compare it to:

        (left, top, right, bottom) = (300, 300, 310, 320)

    Is there any simple way to say "I want to rotate it so that the image is rendered to exactly this destination rectangle after the rotation"? In this case, I would like to say "Please rotate the bitmap so that it appears on the screen at this location":

        (left, top, right, bottom) = (300, 300, 310, 320)

    If I can't do that, is there any way to find out the 10 x 20 destination rectangle where the bitmap is actually being rendered to the screen?
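    One way to get the "rotate into exactly this rectangle" behaviour (a sketch built around the question's numbers, not official Direct2D guidance): rotate about the center of the target rectangle, and draw into a pre-rotation rectangle that shares that center but has width and height swapped, so the 90° turn lands it exactly on the target. The m_pRenderTarget-style naming follows the question; the helper itself is hypothetical:

        // Draw a 20x10 bitmap so it fills the 10x20 rect (300,300)-(310,320).
        #include <d2d1.h>

        void DrawRotatedToRect(ID2D1RenderTarget* rt, ID2D1Bitmap* bmp)
        {
            // The rect the rotated image should occupy on screen (10 wide, 20 tall).
            D2D1_RECT_F target = D2D1::RectF(300.f, 300.f, 310.f, 320.f);
            D2D1_POINT_2F center = D2D1::Point2F((target.left + target.right) / 2.f,
                                                 (target.top + target.bottom) / 2.f);

            // Pre-rotation rect: same center, width/height swapped (20 wide, 10 tall),
            // so that a 90-degree turn about 'center' maps it exactly onto 'target'.
            float halfW = (target.bottom - target.top) / 2.f;   // 10
            float halfH = (target.right - target.left) / 2.f;   // 5
            D2D1_RECT_F preRotate = D2D1::RectF(center.x - halfW, center.y - halfH,
                                                center.x + halfW, center.y + halfH);

            rt->SetTransform(D2D1::Matrix3x2F::Rotation(90.f, center));
            rt->DrawBitmap(bmp, &preRotate);
            rt->SetTransform(D2D1::Matrix3x2F::Identity()); // restore for later drawing
        }

    The pre-rotation rect is also the answer to the last part of the question: it is the rectangle the bitmap is actually given before the transform moves it.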

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them.

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features, including built-in VM, web site, storage, and cloud service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure, and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure portal provides a rich set of management features for Virtual Machines, including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these, which will cause Windows Azure to continuously replicate your storage to a secondary data center at least 400 miles away from your primary data center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally; no import/export steps are required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure portal provides built-in administration support for web sites, including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment, as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow, including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use, allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis, which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services"; with this week's release we are now referring to this capability as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. The cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol, allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:

    1. Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application, regardless of whether the cached data is stored on it or another role. The big benefit of the co-located option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.
    2. Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache tens or hundreds of GBs of data in memory very effectively, and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems, allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements shipping in either preview or final form today; there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significantly upgraded network and storage hardware, SQL Reporting Services, new identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them. Hope this helps, Scott

    Read the article

  • Access Violation when trying to bind Vertex Object Array

    - by Paul
    I've just started digging into OpenGL and I've run into a problem trying to set up a VAO. It gives me a run-time error of:

        An unhandled exception of type 'System.AccessViolationException'

    at:

        // Create and bind a VAO
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

    I have searched the internet high and low for a solution and I haven't found one. The rest of my function looks like this:

        int main(array<System::String ^> ^args)
        {
            // Initialise GLFW
            if( !glfwInit() )
            {
                fprintf( stderr, "Failed to initialize GLFW\n" );
                return -1;
            }
            glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 0); // no antialiasing
            glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3); // We want OpenGL 3.3
            glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
            glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); // We don't want the old OpenGL

            // Open a window and create its OpenGL context
            if( !glfwOpenWindow( 800, 600, 0,0,0,0, 32,0, GLFW_WINDOW ) )
            {
                fprintf( stderr, "Failed to open GLFW window\n" );
                glfwTerminate();
                return -1;
            }

            // Initialize GLEW
            if (glewInit() != GLEW_OK)
            {
                fprintf(stderr, "Failed to initialize GLEW\n");
                return -1;
            }

            glfwSetWindowTitle( "Game Engine" );

            // Create and bind a VAO
            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            glfwEnable( GLFW_STICKY_KEYS );
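    For what it's worth, the classic cause of an access violation at exactly this call is GLEW in a core-profile context: unless glewExperimental is set before glewInit(), GLEW 1.x can leave the glGenVertexArrays entry point as a null function pointer, and calling it crashes just like this. A minimal guard, replacing the GLEW-init section above (a sketch, not a guaranteed fix for this machine):

        // Ask GLEW to load core-profile entry points before resolving them.
        glewExperimental = GL_TRUE;
        if (glewInit() != GLEW_OK)
        {
            fprintf(stderr, "Failed to initialize GLEW\n");
            return -1;
        }

        // If the pointer is still null, report it instead of crashing later.
        if (glGenVertexArrays == NULL)
        {
            fprintf(stderr, "glGenVertexArrays not available (no 3.x context?)\n");
            return -1;
        }

        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);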

    Read the article

  • How to connect two Ubuntu computers with an ethernet cable

    - by Lukasz Zaroda
    I'm trying to connect two computers, a desktop and a laptop, with an ethernet cable. What I want to do is transfer a lot of data from one to the other. I followed: How to network two Ubuntu computers using ethernet (without a router)? But after that, ping always gives me "Destination host unreachable". I searched for a while but couldn't figure out why it doesn't work; maybe it's something about my devices, or maybe someone will have another idea. The ethernet cable came with my router. There is text printed on it:

        Aurit Data Cable Cat.5 UTP 26AWG 4PAIR AWM PUC 75°C EIA/TIA 568B

    It currently connects my desktop to the router, so I can send this question.

    My desktop:

        System: Ubuntu 12.04
        Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03)

    "ethtool -i eth0" output:

        driver: r8169
        version: 2.3LK-NAPI
        firmware-version: rtl_nic/rtl8168d-1.fw
        bus-info: 0000:01:00.0
        supports-statistics: yes
        supports-test: no
        supports-eeprom-access: no
        supports-register-dump: yes

    My laptop:

        System: Ubuntu 14.04
        Ethernet controller: Qualcomm Atheros AR8162 Fast Ethernet (rev 08)

    "ethtool -i eth0" output:

        driver: alx
        version:
        firmware-version:
        bus-info: 0000:01:00.0
        supports-statistics: no
        supports-test: no
        supports-eeprom-access: no
        supports-register-dump: no
        supports-priv-flags: no

    My iptables accept everything. Any ideas why I cannot reach the other computer?

    Read the article

  • Finding links among many written and spoken thoughts

    - by Peter Fren
    So... I am using a digital voice recorder to record anything I consider important, ranging from business to private, from rants to new business ideas. Every finalized idea is one WMA file. I wrote a program to sort the WMA files into folders. From time to time I listen to the WMA files, convert them to text (manually) and insert them into a mind map with MindManager, which I in turn sort hierarchically by area and type. This works very well: no idea is lost, and when I run out of ideas for a particular topic, I listen to what I said and can get started again. What could a search system look like that finds links between thoughts (written in the mind map and in the WMA files), or that in general gives me good search results even when the keyword I searched for is not present but a synonym or related topic is? For instance, searching for flower should also return entries containing orchid, even if they contain orchid but not the keyword flower itself. I'd prefer something ready-made, but small adjustments to a given system are fine as well. How would you approach this task?
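    As a sketch of the core mechanic being asked about (a toy, with a hand-made thesaurus standing in for WordNet or any real synonym source): expand the query term into its synonyms and related terms before matching, then check each transcribed note against the expanded term set:

        // Toy synonym-aware keyword search over transcribed notes.
        #include <iostream>
        #include <map>
        #include <set>
        #include <string>
        #include <vector>

        int main() {
            // note id -> transcribed text (from the mind map / wma transcripts)
            std::vector<std::string> notes = {
                "new business idea about selling orchids online",
                "rant about flower shop logistics",
                "private reminder: water the plants",
            };

            // hand-made expansion table; a real system would load a thesaurus
            std::map<std::string, std::set<std::string>> thesaurus = {
                {"flower", {"orchid", "plant", "rose"}},
            };

            std::string query = "flower";
            std::set<std::string> terms = {query};
            auto it = thesaurus.find(query);
            if (it != thesaurus.end())
                terms.insert(it->second.begin(), it->second.end());

            // report every note containing the query or any expansion of it
            for (size_t i = 0; i < notes.size(); ++i)
                for (const auto& t : terms)
                    if (notes[i].find(t) != std::string::npos) {
                        std::cout << "note " << i << " matches via '" << t << "'\n";
                        break; // one match per note is enough
                    }
        }

    Off-the-shelf search engines with synonym filters (rather than hand-rolled code like this) are the ready-made route the question asks for; the sketch only shows why query expansion finds the orchid note for the query flower.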

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-27

    - by Bob Rhubart
    Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g; How does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?" Process Oracle OER Events using a simple Web Service | Bob Webster Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types." Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients | Andre Correa "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen. WebCenter Content (WCC) Trace Sections | ECM Architect ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace sections. Thought for the Day "A complex system that works is invariably found to have evolved from a simple system that worked." — John Gall Source: SoftwareQuotes.com

    Read the article

  • LibreOffice hangs every time I click into it

    - by Freddy
    I've searched Ask Ubuntu for a solution but couldn't find anything helpful for me. I installed the latest version of Ubuntu (13.10, 64-bit) yesterday (so I'm relatively new to and inexperienced with it) and some programs as well. Today I wanted to try the LibreOffice package which comes with Ubuntu by default. I first started LO Impress, and it showed me the default slide that says something like "Add your title/text here" (I don't know exactly; I'm using my system in German). As soon as I clicked on this text, my entire system stopped responding: I couldn't even click the items in the upper right corner (where the date, shutdown functions etc. are), nor the items on the launcher bar on the left. After 10 seconds or so, everything seemed to work properly again. But when I clicked into LO Impress again, the same thing happened (completely stuck for about 10 seconds). Anybody got a solution for this? If you need more information, just tell me. Thanks in advance!

    Read the article

  • Deferred Rendering With Diffuse,Specular, and Normal maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model (which can be found here) as my sandbox. Note I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp. I extract all geometry information, including tangents and bitangents. For all the aiMaterials I extract the following information, which essentially comes from the sponza.mtl file: ambient/diffuse/specular/emissive reflectivity coefficients (Ka, Kd, Ks, Ke); shininess; diffuse map; specular map; normal map. I understand that I must render vertex attributes such as position, normals and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but don't you require the diffuse, specular and normal maps, and therefore lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents and normals into g-buffers, and then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.
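    A sketch of the usual arrangement (one common g-buffer layout, not the only one): the geometry pass samples the diffuse, specular and normal maps itself, and no lights are needed there, because what lands in the g-buffer are material samples plus a world-space normal, built exactly as guessed above by applying the interpolated TBN to the sampled tangent-space normal. The lighting pass then reads only g-buffer texels. Kept as a C++ raw-string constant since the renderer is C++/GLSL; names and attachment layout are illustrative:

        // Geometry-pass fragment shader for a deferred renderer (GLSL 330).
        // Assumed layout: attachment 0 = albedo.rgb + specular intensity,
        // attachment 1 = world-space normal. T/B/N are assumed to be passed
        // through the vertex shader in world space.
        const char* kGeometryPassFS = R"GLSL(
        #version 330 core
        in vec3 vNormal;
        in vec3 vTangent;
        in vec3 vBitangent;
        in vec2 vTexCoord;

        uniform sampler2D uDiffuseMap;
        uniform sampler2D uSpecularMap;
        uniform sampler2D uNormalMap;

        layout(location = 0) out vec4 gAlbedoSpec;
        layout(location = 1) out vec3 gNormal;

        void main() {
            // build TBN from the interpolated basis (re-orthogonalization omitted)
            mat3 TBN = mat3(normalize(vTangent),
                            normalize(vBitangent),
                            normalize(vNormal));
            // normal map stores [0,1]; remap to [-1,1], then take it to world space
            vec3 n = texture(uNormalMap, vTexCoord).rgb * 2.0 - 1.0;
            gNormal = normalize(TBN * n);

            gAlbedoSpec.rgb = texture(uDiffuseMap, vTexCoord).rgb;
            gAlbedoSpec.a   = texture(uSpecularMap, vTexCoord).r;
        }
        )GLSL";

    Scalar material constants (Ka, Kd, Ks, shininess) can either be packed into spare g-buffer channels the same way, or folded into the map samples before writing.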

    Read the article

  • Oracle Financial Management Analytics 11.1.2.2.300 is available

    - by THE
    (guest post by Greg)

    Oracle Financial Management Analytics 11.1.2.2.300 is now available for download from My Oracle Support as Patch 15921734.

    New features in this release:

    - Support for the new Oracle BI mobile HD iPad client.
    - New Account Reconciliation Management and Financial Data Quality Management analytics.
    - Improved Hyperion Financial Management analytics and usability enhancements.
    - Enhanced Configuration Utility to support multiple products. For HFM, FCM or ARM, and FDM, both Oracle and Microsoft SQL Server databases are supported.
    - Simplified test-to-production migration of OFMA.

    Web browser support for Oracle Financial Management Analytics:

    - Internet Explorer 9 (both 32- and 64-bit).
    - Firefox 6.x.
    - Chrome 12.x.

    See the OBIEE 11.1.1.6 certification matrix: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html

    Oracle Financial Management Analytics compatibility. OFMA supports the following product versions:

    - Oracle Hyperion Financial Data Quality Management Release 11.1.2.2.300
    - Oracle Financial Close Manager Release 11.1.2.2.300
    - Oracle Hyperion Financial Management Release 11.1.2.2.300

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would feel like using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive-makers have to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
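    The "931 GB" figure in the question is just the two conventions colliding, which a couple of lines of arithmetic make concrete (C++ here purely for illustration):

        // 1 TB as sold (10^12 bytes) expressed in binary gigabytes (2^30 bytes).
        #include <cstdio>

        int main() {
            const double drive_tb = 1e12;                  // marketing terabyte
            const double gib = 1024.0 * 1024.0 * 1024.0;   // binary gigabyte, 2^30

            std::printf("1 TB drive = %.1f binary GB\n", drive_tb / gib);  // ~931.3
            std::printf("1 binary TB = %.0f bytes\n", 1024.0 * gib);       // 2^40
        }

    So neither side is "inflating" anything; the same byte count simply divides differently under the two definitions.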

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling. I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers; it seems like a precision error. I don't know what specifically causes the precision to fail. It doesn't seem like it's just the size of the geometry that causes this distortion, because simply scaling the vertex position passed to the vertex shader does not solve the issue. Here are some examples of the texture distortion:

        Distorted texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png
        What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png

    What I have observed and tried:

    - The texture issue is limited to small-scale geometry on OpenGL ES 2.0; otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin XYZ(0,0,0).
    - These texture issues do not occur on desktop OpenGL (it works fine under Windows XP, Windows 7, and Mac OS X). I've only seen the problem occur on Android, iPhone, or WebGL (which is similar to OpenGL ES 2.0).
    - All textures are powers of 2, but the problem still occurs.
    - Scaling the vertex data: the values of a vertex's X Y Z location are in the range -65536 to +65536 floating point. I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating-point precision, but this didn't fix or lessen the texture distortion issue.
    - Scaling the modelview or the projection matrix does not help.
    - Changing texture filtering options does not help: disabling mipmapping, or using GL_NEAREST/GL_LINEAR, does nothing; enabling/disabling anisotropic filtering does nothing; the banding effect still occurs even when using GL_CLAMP.
    - Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader also does not work.
    - precision highp sampler2D, highp float, highp int, in the fragment or the vertex shader, didn't change anything (lowp/mediump did not work either).

    I'm thinking this problem has to have been solved at one point, seeing that OpenGL ES 2.0-based games have been able to render large-scale, highly detailed geometry.
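    One workaround that tends to come up for this class of symptom (offered as an assumption, not a verified fix for this exact case): even when highp is requested, some ES 2.0 GPUs interpolate varyings at reduced precision, so the numbers need to be kept small before they ever reach the GPU. Rebasing positions relative to the camera on the CPU each frame, in double precision, does that; the GPU then never sees coordinates near 65536:

        // Sketch: camera-relative placement to keep GPU-side floats small.
        struct Vec3d { double x, y, z; };

        // Per object, per frame: compute the object's translation relative to
        // the camera. The subtraction happens in double, so no precision is
        // lost even for world coordinates of 65536 and beyond.
        static void EyeRelativeTranslation(const Vec3d& objWorld,
                                           const Vec3d& camWorld,
                                           float out[3]) {
            out[0] = static_cast<float>(objWorld.x - camWorld.x);
            out[1] = static_cast<float>(objWorld.y - camWorld.y);
            out[2] = static_cast<float>(objWorld.z - camWorld.z);
            // upload 'out' as the object's translation; the view matrix then
            // carries rotation only (the camera sits at this space's origin)
        }

    This differs from simply dividing vertex positions by 1024 (tried above): scaling keeps the same relative precision everywhere, whereas rebasing moves the precision to where the camera actually is.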

    Read the article

  • What partition to use to keep data files in Ubuntu?

    - by Martin Lee
    I have been using Ubuntu for a few years, and usually my partition setup was the following: an ext3 or ext4 partition for the system itself (20 GB); a 10 GB swap partition; and a big FAT32 partition to store movies, photos, work stuff, etc. (it depends on the capacity of the disk, but usually it is whatever is left after ext3 + swap, currently more than 200 GB). Does this setup sound right? I am considering switching to one big ext3 partition now, because the problems with FAT32 in Ubuntu have not gone anywhere: for example, right now I can access my 'big' partition, which has the label 'Data', only through /media/_themes?END. Pretty strange name for a partition, isn't it? Some Linux software fails to read/write on this partition: for example, if I want to play around with rebar and build/make/compile things on this FAT32 partition, it always complains about permissions and won't work (the same goes for many other kinds of software). It is not stable either: I cannot refer to some files on this FAT32 partition, because after the next reboot it will be called not '_themes?END' but something else. On the other hand, I usually begin to run out of space on the ext3 partition after a few months of usage. So, the question is: what is the best partition setup for an Ubuntu system? Should a FAT32 partition be used at all?

    Read the article

  • How do I get an application to appear as a choice in update-alternatives?

    - by Jay
    I separately installed the Firefox Beta and Alpha channels, and have desktop configuration files pointing to them in ~/.local/share/applications. However, stable Firefox is being used as my default browser by the system. (Firefox Beta used to be used, until I messed with the "Default Applications" in System Settings, where it is not listed.) I tried running

        sudo update-alternatives --config x-www-browser

    to change it manually, but it only recognizes Chromium and Firefox (stable) and shows them as choices. What can I do to get custom desktop configuration files in ~/.local/share/applications to be seen as default alternatives? I think I may have to fiddle with the desktop config files, or with mimeinfo.cache or mimeapps.list? Running Oneiric. Here is the content of the firefox-beta.desktop file I created:

        [Desktop Entry]
        Name=Firefox Beta
        Exec=firefox-beta -P Beta -no-remote
        Icon=firefox
        Terminal=false
        X-MultipleArgs=false
        Type=Application
        StartupNotify=true
        StartupWMClass=Firefox
        Categories=GNOME;GTK;Network;WebBrowser;
        Comment[en_US]=Firefox Beta Channel
        MimeType=text/html;text/xml;application/xhtml+xml;application/xml;application/vnd.mozilla.xul+xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;x-scheme-handler/ftp;x-scheme-handler/chrome;video/webm;
        Name[en_US]=Firefox Beta

        [NewWindow Shortcut Group]
        Name=Open a New Window
        Exec=firefox-beta -new-window about:blank
        TargetEnvironment=Unity

    Read the article

  • Fixed timestep with interpolation in AS3

    - by Jim Sreven
    I'm trying to implement Glenn Fiedler's popular fixed timestep system, as documented here: http://gafferongames.com/game-physics/fix-your-timestep/ , in Flash. I'm fairly sure I've got it set up correctly, along with state interpolation. The result is that if my character is supposed to move at 6 pixels per frame, 35 frames per second = 210 pixels a second, it does exactly that, even if the framerate climbs or falls. The problem is it looks awful. The movement is very stuttery and just doesn't look good. I find that the amount of time between ENTER_FRAME events, which I'm adding onto my accumulator, averages out to 28.5 ms (1000/35) just as it should, but individual frame times vary wildly: sometimes an ENTER_FRAME event will come 16 ms after the last, sometimes 42 ms. This means that at each graphical redraw the character graphic moves by a different amount, because a different amount of time has passed since the last draw. In theory it should look smooth, but it doesn't at all. In contrast, if I just use the ultra-simple system of moving the character 6 px every frame, it looks completely smooth, even with these large variances in frame times. How can this be possible? I'm using getTimer() to measure these time differences; is it even reliable?
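    For reference, here is the shape of the loop being described, written out in C++ rather than AS3 (the structure is Fiedler's; the numbers match the question). One hedged observation on the Flash side: if the stage snaps display objects to whole pixels, the fractional render position the interpolation produces gets quantized away, which would stutter in exactly the way described even when the math is right.

        // Gaffer-style fixed timestep with render interpolation.
        #include <algorithm>

        struct State { double x = 0, v = 210; };        // 210 px/s, as in the question

        State integrate(State s, double dt) { s.x += s.v * dt; return s; }

        void frame(double frameTime, double& accumulator, State& prev, State& curr) {
            const double dt = 1.0 / 35.0;               // 35 Hz simulation
            accumulator += std::min(frameTime, 0.25);   // clamp: avoid spiral of death

            while (accumulator >= dt) {
                prev = curr;                            // keep last state for blending
                curr = integrate(curr, dt);
                accumulator -= dt;
            }

            double alpha = accumulator / dt;            // progress into the next step
            double renderX = curr.x * alpha + prev.x * (1.0 - alpha);
            // draw at renderX: sub-pixel positions must survive all the way to the
            // screen, or the blend above is rounded away and motion stutters
            (void)renderX;
        }

    Note that interpolation blends between the two most recent states (prev and curr), so the on-screen position must be allowed to take fractional values.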

    Read the article

  • How to create an Orthographic display in OpenGL (ES) that handles different screen sizes and orientations?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES 2.0 that contains a 3D scene with a heads-up display/GUI overlaid on top. However, this problem would also apply if I were to port my game to a computer and run it in a resizable window, or allow the user to change screen resolutions. When trying to make the 2D GUI/HUD work, I've made the assumption that all I'm really doing is drawing a load of 2D textured quads on the screen, and I've been trying to treat the orthographic projection as an old-style 2D display with 0,0 in the upper left and screenWidth,screenHeight in the lower right. This causes me all sorts of confusion when I rotate my iPad into landscape mode, since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone, or a Retina display, since I then have to draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL coords 1:1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGL ES 2.0 and have a matrix library that has equivalents to the GLES 1.1 glOrthof() and glFrustum() calls.
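    One middle ground between the two options above (a sketch, not the only answer): keep a fixed "design space" for the HUD and derive the orthographic bounds from the window each time the size or orientation changes, padding the shorter direction so the aspect ratio is preserved. Quads keep their design-space coordinates on every device and orientation; only the projection changes. The 1024x768 design size below is an arbitrary assumption:

        // Compute letterboxed ortho bounds for a fixed-design-space HUD.
        struct Ortho { float left, right, bottom, top; };

        Ortho UiProjection(float screenW, float screenH,
                           float designW = 1024.f, float designH = 768.f) {
            float designAspect = designW / designH;
            float screenAspect = screenW / screenH;
            Ortho o{0.f, designW, designH, 0.f};        // y-down, origin top-left

            if (screenAspect > designAspect) {          // window wider: pad x
                float pad = (designH * screenAspect - designW) / 2.f;
                o.left -= pad; o.right += pad;
            } else {                                    // window taller: pad y
                float pad = (designW / screenAspect - designH) / 2.f;
                o.top -= pad; o.bottom += pad;
            }
            return o; // feed into the glOrthof()-equivalent to build the matrix
        }

    On texture quality: with this scheme textures do get scaled on non-native sizes, so the usual compromise is one high-resolution texture set filtered down, or a per-density set (@2x and so on) selected at load time.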

    Read the article

  • USB mass storage couldn't get mounted

    - by revo
    It's my Android phone's SD card, which was flagged as damaged by Android yesterday night, out of the blue! I put it directly into a USB port with a USB SD card holder case, expecting to recover it with TestDisk, which I had used before in a similar situation. I also noticed a change in file system and capacity:

        File System : RAW
        Capacity : 0 (unknown capacity)

    TestDisk doesn't show it in its partitions list either. A 2 GB SD card is not that expensive, but I have a lot of files and media on it which I need. Using a mini card reader, TestDisk displayed it in its list, but a quick search and a deep search both have no results ("No partition found or selected for recovery") and then I have to quit the program. Your help is appreciated.

    Update #2: lsusb output:

        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 002: ID 04f3:0234 Elan Microelectronics Corp.
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 002: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 007 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 006 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Read the article

  • Webcast Series Part II: Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model

    - by Melissa Centurio Lopes
    Register today for the second part of this webcast series on Thursday, November 29, 2012, 10:00 a.m. PT / 1:00 p.m. ET. Project Portfolio Management solutions have an immediate and lasting impact on both providers' and contractors' bottom lines by helping to manage the costs and risks of healthcare infrastructure projects from planning through handing over and operating. During this webcast, Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model, Garrett Harley and Thomas Koulouris will continue their discussion on healthcare infrastructure strategy changes and will cover the following topics: the shift in healthcare infrastructure strategy and how it will impact providers and contractors; the Integrated Infrastructure & Lifecycle Solutions for Capital Assets and how these solutions help your business; communication and integration between providers and contractors and why they are so important to your bottom line; and the new integrated delivery system in healthcare infrastructure and how Project Portfolio Management is critical to the success of that system.

    Read the article

  • Should I modify an entity with many parameters or with the entity itself?

    - by Saeed Neamati
    We have an SOA-based system. The service methods are like:

        UpdateEntity(Entity entity)

    For small entities, this is all fine. However, as entities get bigger and bigger, to update one property the UI has to follow this pattern:

    1. Get parameters from the UI (user).
    2. Create an instance of the entity, using those parameters.
    3. Get the entity from the service.
    4. Write code to fill in the unchanged properties.
    5. Give the resulting entity to the service.

    Another option, which I've seen in previous projects, is to create semantic update methods for each update scenario. In other words, instead of having one global all-encompassing update method, we had many ad hoc parametric methods. For example, for the User entity, instead of having an UpdateUser(User user) method, we had these methods:

        ChangeUserPassword(int userId, string newPassword)
        AddEmailToUserAccount(int userId, string email)
        ChangeProfilePicture(int userId, Image image)
        ...

    Now, I don't know which approach is truly better, and we run into problems with each. I'm going to design the infrastructure for a new system, and I don't have enough reasons to pick either of these approaches. I couldn't find good resources on the Internet, because I lacked the right keywords. Which approach is better? What pitfalls does each have? What benefits can we get from each one?
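    To make the contrast concrete, here are the two shapes side by side as interfaces (C++ here for illustration; the original service is C#-flavoured SOA, and the names are the question's own):

        #include <string>

        struct Image { /* ... */ };
        struct User {
            int id;
            std::string password, email;
            // ...many more fields on a large entity...
        };

        // Option 1: one coarse-grained update. Callers must read, modify and
        // round-trip the whole entity, which is the five-step pattern above.
        struct IUserServiceCoarse {
            virtual User GetUser(int userId) = 0;
            virtual void UpdateUser(const User& user) = 0;  // full entity every time
            virtual ~IUserServiceCoarse() = default;
        };

        // Option 2: semantic, per-scenario operations. No read-modify-write is
        // needed, and each call names an intent the server can validate and audit,
        // at the cost of a wider service surface that grows with every scenario.
        struct IUserServiceSemantic {
            virtual void ChangeUserPassword(int userId, const std::string& newPassword) = 0;
            virtual void AddEmailToUserAccount(int userId, const std::string& email) = 0;
            virtual void ChangeProfilePicture(int userId, const Image& image) = 0;
            virtual ~IUserServiceSemantic() = default;
        };

    The comments capture the trade-off the question circles around: option 1 keeps the contract small but pushes merge logic onto every caller; option 2 keeps callers trivial but multiplies methods.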

    Read the article

  • DPKG errors after upgrade to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10 and now I can't get my system to update all of its packages properly. No matter what I do (cleaning the apt cache, manual install using dpkg, etc.), I just can't get them to install. What is happening here and how do I fix it? If I had thought 12.10 would be this much of a hassle I would never have upgraded... Here is a sampling of what "apt-get -f install" returns:

        Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
        /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
        dpkg: warning: subprocess old pre-removal script returned error exit status 2
        dpkg: trying script from the new package instead ...
        /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
        dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 2
        /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
         /var/cache/apt/archives/pcmciautils_018-8_i386.deb
         /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
         /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
         /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
         /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It is also just these 6 packages; no other packages have given me this kind of trouble. Well, I should say as of now: it was just 5, but then I got an update for Unity, and now unity-common has been added to the troublemakers, which prevents me from upgrading the actual Unity package, as this package is a dependency.

    Read the article
