Search Results

Search found 27426 results on 1098 pages for 'build tools'.


  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/

    When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) and cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan, where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold.

    Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need a reliable way to estimate that. Businesses and organizations are used to paying for servers, software licenses, and other infrastructure as an up-front cost, and for power, the people who run the systems, and so on as an ongoing (and sometimes unaccounted-for) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on the features of the application versus the true costs.

    The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use-type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users, and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs), and hold that cost steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those true on-premise costs. For instance, do you know exactly how much the air conditioning costs, or the fully loaded cost of your team of system administrators?
    This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program (a minimal sketch of this calculation follows the “Measure and Project” section below). That sounds a bit simplistic, but the calculation becomes more detailed as you apply real metrics. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation

    The simplest way to calculate costs is to architect the application (even in UML or on paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with columns like this: Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type.

    Measure and Project

    A more accurate model is to actually write the code for the application using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or, in some cases, a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.
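    Before that method, here is the promised minimal sketch of the component/unit/cost matrix that all of these approaches build on. Every component name, price, and usage figure below is a hypothetical placeholder for illustration only - they are not actual Azure rates (see the pricing links above for those).

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Sketch of the component/unit/cost matrix: for each component,
        // multiply cost per unit by estimated usage, then sum. All numbers
        // are hypothetical placeholders, not real Azure prices.
        public class CostModel {
            public static void main(String[] args) {
                // component -> cost per unit (hypothetical)
                Map<String, Double> costPerUnit = new LinkedHashMap<String, Double>();
                costPerUnit.put("Compute (instance-hours)", 0.12);
                costPerUnit.put("Storage (GB-months)", 0.15);
                costPerUnit.put("Transactions (per 10,000)", 0.01);
                costPerUnit.put("Bandwidth (GB out)", 0.15);

                // component -> estimated monthly usage (hypothetical)
                Map<String, Double> estimatedUse = new LinkedHashMap<String, Double>();
                estimatedUse.put("Compute (instance-hours)", 2.0 * 24 * 30); // two instances, one month
                estimatedUse.put("Storage (GB-months)", 50.0);
                estimatedUse.put("Transactions (per 10,000)", 1200.0);
                estimatedUse.put("Bandwidth (GB out)", 80.0);

                double cumulative = 0.0;
                for (Map.Entry<String, Double> entry : costPerUnit.entrySet()) {
                    double total = entry.getValue() * estimatedUse.get(entry.getKey());
                    cumulative += total;
                    System.out.printf("%-28s $%10.2f%n", entry.getKey(), total);
                }
                System.out.printf("%-28s $%10.2f%n", "Cumulative cost", cumulative);
            }
        }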
    Sample and Estimate

    This method uses standard statistical and other predictive math. Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, and use methods like regression to project future costs. I normally advise that the architect also extrapolate a unit cost from those metrics. This is the information that should be reported back to the executives that pay the bills: the past cost, the projected future costs, and the unit cost “per click” or “per transaction”, as your case warrants (a minimal sketch of this projection follows the reference list below). The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample is key (in this case I recommend using the entire population, not a smaller sample).

    References and Tools

    Articles:
    http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx

    Other Tools:
    http://cloud-assessment.com/
    http://communities.quest.com/community/cloud_tools
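    As promised above, a minimal sketch of the “Sample and Estimate” projection - a simple least-squares trend over monthly bill totals plus a unit cost. The bill amounts and transaction count are invented for illustration; a real analysis would work from the exported billing data.

        // Fit a least-squares line to past monthly bill totals and project
        // the next month, plus a unit cost. All figures are hypothetical.
        public class CostProjection {
            public static void main(String[] args) {
                double[] monthlyBills = {312.40, 298.75, 344.10, 361.92, 355.08, 390.27};
                int n = monthlyBills.length;

                // Simple linear regression: x = month index, y = bill total.
                double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
                for (int x = 0; x < n; x++) {
                    double y = monthlyBills[x];
                    sumX += x;
                    sumY += y;
                    sumXY += x * y;
                    sumXX += (double) x * x;
                }
                double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
                double intercept = (sumY - slope * sumX) / n;

                System.out.printf("Projected next month: $%.2f%n", slope * n + intercept);

                // Unit cost "per transaction" for the most recent month
                // (hypothetical transaction count).
                double transactionsLastMonth = 48000.0;
                System.out.printf("Unit cost per transaction: $%.5f%n",
                        monthlyBills[n - 1] / transactionsLastMonth);
            }
        }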

    Read the article

  • Delphi's OTA: is there a way to get active configuration while building (D2010)?

    - by Alexander
    I can ask Delphi to build all configurations at once - by clicking on "Build configurations" and invoking the "Make" command. This will build all configurations, one after another. The problem is that we have an IDE expert which must react to compilation events. We register IOTAIDENotifier80 to hook events. There are BeforeBuild and AfterBuild events - we're interested in those. An IOTAProject is passed to each event. The problem is: the active configuration is never changed. I.e. if you have the "Debug" configuration selected (marked in bold), all calls to the BeforeBuild/AfterBuild events will return the debug configuration profile (even though the IDE compiles the different profiles one after another). I mean the properties of IOTAProject here. I also tried to use IOTAProjectOptionsConfigurations, but its ActiveConfiguration property always returns the same "bolded" profile, regardless of the one currently being compiled. The question is: is there a way to get the "real" current profile?

    Read the article

  • Silverlight project builds in VS2008 but fails when using MSBuild

    - by Tom
    Hi, We have 2 Silverlight projects in the same solution: SLGlobalResource and SLData. SLData references SLGlobalResource (added via References -> Add Reference -> Projects). When we build it in Debug within VS2008, everything builds fine and all is good. But when we build it using:

        msbuild TheSolution.sln /p:Configuration=Debug /t:rebuild

    SLData fails with the following error:

        ViewModels\ImportViewModel.cs : error CS0246: The type or namespace name "SLGlobalResource" could not be found (are you missing a using directive or an assembly reference?)

    This also happens in TeamCity (I guess because the TeamCity VS2008 runner uses MSBuild). Any ideas? Thanks

    Edit: There are actually 33 projects in total in the solution. I didn't think this was relevant before, but now I'm thinking it could be - could this be a build order thing?

    Read the article

  • Can't get Xdebug to work on Windows 7

    - by Derek
    I installed the latest XAMPP package, which includes PHP 5.3.0. I am trying to enable Xdebug, but it just won't work. Here's what I changed in the php.ini shipped with XAMPP:

        ; uncommented
        zend_extension = "X:\xampp\php\ext\php_xdebug.dll"
        ; added the following lines:
        xdebug.remote_enable=true
        xdebug.remote_host=localhost
        xdebug.remote_port=9000
        xdebug.remote_handler=dbgp

    Apache starts fine, but when I open http://localhost/ in my browser, I get an error dialog. If I click the "Close the program" button, the error message reappears a second later, as if in an infinite loop. I'd greatly appreciate any help in getting this to work. I am running a fresh install of Windows 7 Ultimate 64-bit.

    EDIT: From the result of phpinfo():

        Zend Extension Build: API220090626,TS,VC6
        PHP Extension Build: API20090626,TS,VC6
        Debug Build: no
        Thread Safety: enabled

    Read the article

  • Does a recursive Ant task exist to recover properties from external file?

    - by Julia2020
    Hi, I've got a problem getting properties with Ant from a properties file. With a simple target like this in my build.xml, I'd like to get at least two properties, path1 and path2. I'd like a generic target to get these two properties, in order to avoid modifying the build.xml whenever a new property is added. Any suggestions? Thanks in advance!

    build.xml:

        <target name="TEST" description="test ant">
            <property file="dependencies.properties"/>
            <svn>
                <export srcUrl="${path.prop}" destPath="${workspace}/rep/" />
            </svn>
        </target>

    dependencies.properties:

        path1.prop = /path/to/src1
        path2.prop = /path/to/src2

    Read the article

  • Error while installing dependencies for PyGTK on Mac OS 10.6.3

    - by Winston C. Yang
    I tried to install the following dependencies for PyGTK 2.16.0 (the Python bindings for GTK+, the GIMP Toolkit) on Mac OS 10.6.3:

        glib 2.25.5
        gettext-0.18
        libiconv-1.13.1

    When I tried to install glib, I got the following error message:

        gconvert.c:55:2: error: #error GNU libiconv not in use but included iconv.h is from libiconv

    The libiconv web page talks about a circular dependency between gettext and libiconv - build one, then build the other, then build the first again. I tried to do this, though possibly incorrectly. (Will the following work: make distclean; ./configure; make; sudo make install?) The author of a posting had the same problem, and he solved it by installing libiconv-1.13.1. Could anyone explain the error in more detail, and how to correct it?

    Read the article

  • TFSBuild/MSBuild and Project Reference vs File Reference

    - by anon
    We have a large VS solution using project references, which is built by TFS Build like so:

        Solution
        - Project 1
        - Project 2
        - Project ...
        - Project N

    Because the solution is too large, we have several smaller solutions which we use day to day:

        SubSolution
        - Project 1
        - Project 19

    The problem is that developers working on SubSolution find that it is not building because the project references cannot be found, so they change the projects to use file references. This then goes on to break the TFS Build, which cannot find these file references because they have not been built yet (even though the projects are in the same solution). Is there a way around this tug of war between the two types of references? What is the correct way of splitting up your solutions?

    Read the article

  • How to properly manage multi-level SUBDIRS in Makefile.am:s?

    - by Jukka Dahlbom
    Working on WinXP with MinGW (gcc4.4) / MSYS, I am trying to get the autotools build working for Apache Axis C, which does not support MinGW yet. A common issue automake complains about is caused by the following line in various Makefile.am files, e.g. axis-c-trunk/src/core/Makefile.am:

        SUBDIRS = [other child dirs] deployment transport/http/util transport/http/common engine transport

    The intent of this line is to force the build order so that transport/http/util and transport/http/common are built before the engine directory, and the rest of transport is built after engine. This line causes the following error when running automake under MinGW:

        src/core/Makefile.am:1: directory should not contain `/'

    Now, what would be the correct way of directly including grandchild directories so that it works like an ordinary SUBDIRS inclusion of immediate child directories?

    Read the article

  • Why doesn't Maven's mvn clean ever work the first time?

    - by hoffmandirt
    Nine times out of ten when I run mvn clean on my projects I get a build error. I have to execute mvn clean multiple times until the build error goes away. Does anyone else experience this? Is there any way to fix it within Maven? If not, how do you get around it? I wrote a bat file that deletes the target folders, and that works well, but it's not practical when you are working on multiple projects. I am using Maven 2.2.1.

        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to delete directory: C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target.
               Reason: Unable to delete directory C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target\classes\com\a\b
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 6 seconds
        [INFO] Finished at: Fri Oct 23 15:22:48 EDT 2009
        [INFO] Final Memory: 11M/254M
        [INFO] ------------------------------------------------------------------------

    Read the article

  • Do not compile t4 file

    - by brian b
    Suddenly, after doing a TFS 2010 get, Visual Studio 2010 is attempting to compile my .tt file as if it were C#. Moreover, any time I set it to "Build Action = None", the Build Action gets mysteriously reset to Compile. This is breaking our builds on the desktop. I can get desktop builds to work by closing and then reopening VS. Our builds on TFS are totally broken because of this. What to do? The template generates a (totally OK) C# file, so I need the project to build. I tried changing the file extension from .tt to .donotbuilddammit, but that had no effect.

    Read the article

  • Visual Studio 2010 Productivity Tips and Tricks&ndash;Part 1: Extensions

    - by ToStringTheory
    I don’t know about you, but when it comes to development, I prefer my environment to be as free of clutter as possible.  It may surprise you to know that I have tried ReSharper, and did not like it, for the reason that I stated above: in my opinion, it had too much clutter.  Don’t get me wrong, there were a couple of features that I did like about it (inversion of if blocks, code feedback), but for the most part, I actually felt that it was slowing me down.

    Introduction

    Another large factor, besides intrusiveness/speed, in my choice to dislike ReSharper would probably be that I have become comfortable with my current setup and extensions.  I believe I have a good collection, and am quite happy with what I can accomplish in a short amount of time.  I figured that I would share some of my tips/findings regarding Visual Studio productivity here, and see what you had to say.

    The first section of things that I would like to cover are Visual Studio extensions.  In case you have been living under a rock for the past several years, extensions are available under the Tools menu in Visual Studio. The Extension Manager enables integrated access to the Microsoft Visual Studio Gallery online, with access to a few thousand different extensions.  I have tried many extensions, but for reasons of lacking reliability, usability, or features, have uninstalled almost all of them.  However, I have come across several that I find I can not do without anymore:

    - NuGet Package Manager (Microsoft)
    - Perspectives (Adam Driscoll)
    - Productivity Power Tools (Microsoft)
    - Web Essentials (Mads Kristensen)

    Extensions

    NuGet Package Manager

    To be honest, I debated on whether or not to put this in here.  Most people seem to have it; however, there was a time when I didn’t, and I was always confused when blogs/posts would say to right click and “Add Package Reference…”, which with one of the latest updates is now “Manage NuGet Packages”.  So, if you haven’t downloaded the NuGet Package Manager yet, or don’t know what it is, I would highly suggest downloading it now!

    Features: Simply put, the NuGet Package Manager gives you a GUI and command line to access different libraries that have been uploaded to NuGet.  Some of its features include:

    - Ability to search NuGet for packages via the GUI, with information in the detail bar on the right.
    - Quick access to see what packages are in a solution, and what packages have updates available, with easy 1-click updating.
    - If you download a package that requires references to other NuGet packages, they will be downloaded and referenced automatically.

    Productivity Tip: If you use any type of source control in Visual Studio as well as using NuGet packages, be sure to right-click on the solution and click "Enable NuGet Package Restore". What this does is add a NuGet package to the solution so that it will be checked in alongside your solution, as well as automatically grab packages from NuGet on build if needed. This is an extremely simple way to manage your package references, instead of having to manually go into TFS and add the Packages folder.

    Perspectives

    I can't stand developing with just one monitor, especially when it comes to debugging. The great thing about Visual Studio 2010 is that all of the panels and windows are floatable, and can dock to other screens. The only bad thing is, I don't use the same toolset for everything that I am doing. By this, I mean that I don't use all of the same windows for debugging a web application as I do for coding a WPF application. The only problem is, Visual Studio doesn't save the screen positions for all of the undocked windows. So, I got curious one day and decided to check and see if there was an extension to help out. This is where I found Perspectives.

    Features: Perspectives gives you the ability to configure window positions across any of your monitors, and then to save the positions in a profile.

    - Perspectives offers a panel to manage different presets/favorites, and a toolbar to add to the toolbars at the top of Visual Studio.
    - Ability to 'favorite' a profile to add it to the Perspectives toolbar.

    Productivity Tip: Take the time to set up profiles for each of your scenarios - debugging web/WinForms/XAML, coding, maintenance, etc. Try to remember to use the profiles for a few days, and at the end of a week, you may find that your productivity was never better.

    Productivity Power Tools

    Ah, the Productivity Power Tools... quite possibly one of my most used extensions, if not my most used. The tool pack gives you a variety of enhancements, ranging from keyboard shortcuts to interface tweaks to completely new features for Visual Studio 2010.

    Features: I don't want to bore you with all of the features here, so here are my favorites:

    - Quick Find - Unobtrusive search box in the upper-right corner of the code window. Great for searching in general, especially within a file.
    - Solution Navigator - The 'Solution Explorer' on steroids. Easy to search for files, see defined members/properties/methods in files, and my favorite feature is the 'set as root' option.
    - Updated 'Add Reference...' Dialog - This is probably my favorite enhancement, period... the 'Add Reference...' dialog redone in a manner that resembles the Extension/Package managers. I especially love the ability to search through all of the references.
    - "Ctrl-Click" for Definition - I am still getting used to this, as I usually try to use my keyboard for everything, but I love the ability to hold Ctrl and turn properties/methods/variables into hyperlinks that you click on to see their definitions. Great for travelling down a rabbit hole in an application to research problems.

    While there are other commands/utilities, I find these to be the ones that I lean on the most for their usefulness.

    Web Essentials

    If you do any type of web development in ASP.NET, ASP.NET MVC, or even plain HTML, I highly suggest grabbing Web Essentials right NOW! This extension alone is great for productivity in web development, and greatly decreases my development time on new features.

    Features: Some of its best features include:

    - CSS Previews - I say 'previews' because of the multiple kinds of previews in CSS that you get: font-family, color, and background/background-image previews. This is great for tweaking the UI slightly in different ways and seeing how the variations look in the CSS window at a glance.
    - Live Preview - One word: awesome! This goes well with my multi-monitor setup. I put the site on one monitor in a Live Preview panel, and then as I make changes to CSS/cshtml/aspx/html, the preview window updates with each save/build automatically. For CSS, you can even turn on live update, so as you are tweaking CSS, the style changes in real time. Great for tweaking colors or font sizes.
    - Outlining - Small, but I like to be able to collapse regions/declarations that are in the way of new work, or are just distracting.
    - Commenting Shortcuts - I don't know why it wasn't included by default, but it is nice to have the key shortcuts for commenting working in the CSS editor as well.

    Productivity Tip: When working on a site, hit Ctrl-Alt-Enter to launch the Live Preview window. Dock it to another monitor. When you make changes to the document/CSS, just save and glance at the other monitor. No need to Alt-Tab over and back before continuing editing.

    Conclusion

    These extensions are only the most useful and least intrusive - the ones that I use every day. The great thing about Visual Studio 2010 is the extensibility it offers developers. Have an extension that you use that isn't intrusive, but isn't listed here? Please, feel free to comment. I love trying new things, and am always looking for new additions to my toolset. Finally, please keep an eye out for Part 2, on key shortcuts in Visual Studio. Also, if you are visiting my site (http://tostringtheory.com || http://geekswithblogs.net/tostringtheory) from an actual browser and not a feed, please let me know what you think of the new styling!

    Read the article

  • Factory Girl Association

    - by David Lyod
    I have an Admin -> Account association in Factory Girl. I now wish to associate a second user with the same account, but am unable to do so. I build my Admin-Account association like this:

        u.account { |account| account.association(:account) }

    This works fine and creates the Account and Admin association. I'm looking for a way to set up a second user whose account also points to the record created in the Admin factory association. I currently just build the second user as such:

        @user = Factory.build(:seconduser)
        @user.account = Account.first
        @user.save!

    which works, but seems somewhat hacky.

    Read the article

  • sharing web user controls across projects.

    - by Kyle
    I've done this using a regular .cs file that just extends System.Web.UI.UserControl, and then including the assembly of the project that contains the control in other projects. I've also created .ascx files in one project, then copied all the .ascx files from a specified folder in the Properties -> Build Events -> Pre-build event. Now what I want to do is a combination of those two: I want to be able to use .ascx files that I build in one project in all of my other projects, but I want to include them just using assembly references rather than having to copy them to my "secondary" projects, as that seems a ghetto way to accomplish what I want. It works, yes, but it's not very elegant. Can anyone let me know if this is even possible, and if so, what the best way to approach it is?

    Read the article

  • Why is javac failing on @Override annotation

    - by skiphoppy
    Eclipse is adding @Override annotations when I implement methods of an interface. Eclipse seems to have no problem with this, and our automated build process from CruiseControl seems to have no problem with this. But when I build from the command line, with ant running javac, I get this error:

        [javac] C:\path\project\src\com\us\MyClass.java:70: method does not override a method from its superclass
        [javac]     @Override
        [javac]      ^
        [javac] 1 error

    Eclipse is running under Java 1.6. CruiseControl is running Java 1.5. My ant build fails regardless of which version of Java I use.
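    For context, this is the error javac emits when @Override is applied to an interface-method implementation under Java 5 source rules (where @Override is only legal on true superclass overrides); Java 6 relaxed this. A minimal sketch, with hypothetical names, that reproduces it when compiled with -source 1.5:

        interface Greeter {
            String greet(String name);
        }

        public class MyClass implements Greeter {
            // Compiles under javac -source 1.6, but under -source 1.5 fails with:
            // "method does not override a method from its superclass"
            @Override
            public String greet(String name) {
                return "Hello, " + name;
            }

            public static void main(String[] args) {
                System.out.println(new MyClass().greet("world"));
            }
        }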

    Read the article

  • Delivering the Integrated Portal Experience!

    - by Michael Snow
    Guest post by Richard Maldonado, Principal Product Manager, Oracle WebCenter Portal

    Organizations are still struggling to standardize on a user interaction platform which can meet the needs of all their target audiences.  This has not only resulted in inefficient and inconsistent experiences for their users, but it also creates inefficiencies (productivity and costs) for the departments that manage the applications and information systems.  Portals have historically been the unifying platform that provides IT with a common interface which can securely surface the most relevant interactions for a given user and/or group of users.  However, organizations have found that the available technologies either have not provided the flexibility necessary to address all of their use cases, or rely too much on IT resources to manage, maintain, and evolve.

    Empowering the Business Groups

    The core issue that IT departments face in delivering portal experiences is having enough resources to respond to and address the influx of requirements which come in from the business.  Commonly, when a business group wants a new portal site established for their group, they submit a request to the IT department, which then assigns an administrator and/or developer to build it.  Unfortunately, this approach is not scalable; it is a time-consuming activity which requires significant interaction between the business owner and the IT resource.  A modern user interaction platform should empower the business groups by providing them tools which they can use to build and manage the portal experiences without the need for IT's involvement.  And because business groups rarely have technical resources (developers) on staff, the tools must be easy enough that virtually any business user could use them.  In addition, the tools must be powerful enough to allow them to build the experience that they need: creating a whole new portal, adding/managing pages and page hierarchy, managing user/group access, adding/modifying components within a page, etc.  This balance between ease-of-use and flexibility is key to the successful adoption of tools which will ultimately reduce the burden on IT, respond to the needs of the business, and deliver high-value experiences for the users.

    Ready or Not, Here They Come: Smartphones and Tablets

    Recently, several studies have highlighted that smartphone and tablet-style devices have overtaken PCs in both sales and usage.  This shift is further driving organizations to re-evaluate how they deliver data, information, and applications to their users.  Users expect the same level of access and interaction, but in ways which are optimized for the capabilities of the device they are using.

    Expect More

    With the ever-growing number of new IT projects and flat or shrinking budgets, organizations are looking for comprehensive solutions which can deliver integrated web experiences that are tailored for the users and optimized for mobile devices.  Piecing together a number of point solutions is no longer an option.  A modern portal technology should not only address the traditional needs of integrating and surfacing back-end applications/information, but should also enable the business through easy-to-use tools and accelerate the delivery of mobile-optimized experiences.
    WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter - featuring Qualcomm & Keste

    Read the article

  • VS2010 always relinks the project

    - by Rob Walker
    I am migrating a complex mixed C++/.NET solution from VS2008 to VS2010. The upgraded solution works in VS2010, but the build system is always refreshing one C++/CLI assembly. It doesn't recompile anything, but the linker touches the file. This causes a ripple effect downstream in the build, as a whole bunch of dependent projects then get rebuilt. Any ideas on how to find out why it thinks it needs to relink the file? I've turned on verbose build logging, but nothing stands out.

    Read the article

  • building mono from svn - android target

    - by Jeremy Bell
    There were patches made to Mono on trunk SVN to support Android. My understanding is that, essentially, instead of Koush's system, which builds Mono using the Android NDK build system directly, these patches add support for the Android NDK using the regular Mono configure.sh process. I'd like to play around with this patch, but not being an expert in the Mono build system, I have no idea how to tell it to target the Android NDK, or even where to look. I've been able to build Mono from SVN using the default target (Linux) on Ubuntu, but no documentation on how to target Android was given with the patches. Since anyone not submitting or reviewing a patch is generally ignored on the Mono mailing list, I figured I'd post the question here.

    Read the article

  • Signing an unsigned assembly

    - by dagda1
    The recent upgrade to NHibernate 2.1 has brought a mega headache situation to the surface. It seems most of the projects build by default as signed assemblies. For example, Fluent NHibernate references the keyfile fluent.snk. NHibernate.Search builds unsigned from what I can gather, and will not build signed - that is, if you reference a generated keyfile, you get the error:

        Referenced assembly 'Lucene.Net' does not have a strong name

    This means projects like Castle.ActiveRecord that have NHibernate.Search as a dependency will not build, as you get the horrendous error that the referenced assembly NHibernate.Search does not have a strong name. Quite a few projects use Castle.ActiveRecord, so it is quite important that this builds. Has anyone any idea what to do here, as I am totally out of ideas? This is complete madness.

    Read the article

  • Building MobileVLCKit from git.videolan.org repository on macOsX with XCode

    - by ztepsic
    I would like to make an application for iOS (iPhone and iPad) that can play streaming video over the RTSP protocol (including MMS). I imagine achieving this using the VLC player or the libVLC library. In the official VLC git repository (http://git.videolan.org/?p=vlc.git;a=tree), in the projects/macosx/framework/ folder, there is an Xcode project, MobileVLCKit.xcodeproj, which I assume is a somewhat usable VLC framework for iOS. Now the problem is that I can't/don't know how to build this project. When I try to build MobileVLCKit.xcodeproj, I get an error that says it can't find files inside the extras/contrib/hosts/i686-apple-darwin10/ios/ folder. I have looked within that folder (extras/contrib) and managed to create the folder (with files) extras/contrib/hosts/i686-apple-darwin10/ with make, but there is no ios folder. So, does anybody know how to successfully build MobileVLCKit?

    Read the article

  • from C to assembly

    - by lego69
    How can I get assembly code from a C program? I followed this recommendation and pass something like -c -fmessage-length=0 -O2 -S in Eclipse, but I get an error. Thanks in advance for any help. This is the error:

        **** Internal Builder is used for build ****
        gcc -O0 -g3 -Wall -c -fmessage-length=0 -O2 -S -oatam.o ..\atam.c
        gcc -oatam.exe atam.o
        D:\technion\2sem\matam\eclipse\eclipse\mingw\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe:atam.o: file format not recognized; treating as linker script
        D:\technion\2sem\matam\eclipse\eclipse\mingw\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe:atam.o:1: syntax error
        collect2: ld returned 1 exit status
        Build error occurred, build is stopped
        Time consumed: 281 ms.

    Read the article

  • Eclipse and Cassandra

    - by H2oNinja
    I've searched various websites for instructions on how to link Cassandra and Eclipse, and followed the directions to the last detail on several sites. For some reason, while using Git Bash, I can't get through the 'ant build' step described in those instructions. In some locations it sounds easy - just make sure you have:

    1. Apache Cassandra source
    2. Apache Ant
    3. Git

    So, yeah, I've downloaded all of the above, tried the same directory, different directories, etc., but am still unable to get past the middle step of 'ant build'. Here are a few websites I've used to muddle through setting up the source code for both Eclipse and Cassandra:

    http://uisurumadushanka89.blogspot.com/2012/02/apache-cassandra-how-to-setup-source.html
    http://wiki.apache.org/cassandra/RunningCassandraInEclipse

    Both result in an immediate halt at the 'ant build'. Any insights or information are greatly appreciated. Thank-you, Ryan

    Read the article

  • VSS: How do I recover from "File <foo> has been destroyed, and cannot be rebuilt."?

    - by Eniac
    We're running Visual SourceSafe 6.0 (build 8163). In one project, there's an old label I want to do a Get on, but a few files have been added and destroyed since that label was created. Now, every time I try to do a Get on the label, I get - for each destroyed file - the warning "File has been destroyed and cannot be rebuilt, do you want to continue?" (which seems completely stupid, since the destroyed files never existed before the label was set). I've tried adding files with the same names, but that didn't help. I also tried deleting (not destroying) those added files, but that didn't help either. I really want to be rid of the warning, since the home-cooked build app we use to build all the projects doesn't handle this error/warning, and hence can't Get the requested label and build that project. Help! (And no, running VSS is not by choice, trust me; I was hoping never to see it again after the first time I was forced to use it, which was ten years ago.)

    Read the article

  • make include directive and dependency generation with -MM

    - by Robert S. Barnes
    I want a build rule to be triggered by an include directive if the target of the include is out of date or doesn't exist. Currently the makefile looks like this:

        program_NAME := wget++
        program_H_SRCS := $(wildcard *.h)
        program_CXX_SRCS := $(wildcard *.cpp)
        program_CXX_OBJS := ${program_CXX_SRCS:.cpp=.o}
        program_OBJS := $(program_CXX_OBJS)
        DEPS = make.deps

        .PHONY: all clean distclean

        all: $(program_NAME) $(DEPS)

        $(program_NAME): $(program_OBJS)
        	$(LINK.cc) $(program_OBJS) -o $(program_NAME)

        clean:
        	@- $(RM) $(program_NAME)
        	@- $(RM) $(program_OBJS)
        	@- $(RM) make.deps

        distclean: clean

        make.deps: $(program_CXX_SRCS) $(program_H_SRCS)
        	$(CXX) $(CPPFLAGS) -MM $(program_CXX_SRCS) > make.deps

        include $(DEPS)

    The problem is that the include directive seems to execute before the rule that builds make.deps, which effectively means make either gets no dependency list (if make.deps doesn't exist) or always gets the make.deps from the previous build, not the current one. For example:

        $ make clean
        $ make
        makefile:32: make.deps: No such file or directory
        g++ -MM addrCache.cpp connCache.cpp httpClient.cpp wget++.cpp > make.deps
        g++ -c -o addrCache.o addrCache.cpp
        g++ -c -o connCache.o connCache.cpp
        g++ -c -o httpClient.o httpClient.cpp
        g++ -c -o wget++.o wget++.cpp
        g++ addrCache.o connCache.o httpClient.o wget++.o -o wget++

    Read the article

  • How to setup directories in Visual Studio when using boost?

    - by Rich
    Hi, I have introduced boost to our code base. On my machine, I created a boost directory called Thirdparty.Boost and added that as an additional include directory in my Visual Studio settings; all is fine. However, I now want to check in my changes so the rest of the team can get them. In order to build the code, they would need to set up boost as I have (problem number 1). In addition, we have a build server, which will need changing (problem 2). I have a way of distributing boost to everyone, including the build server, so that's not a problem. What I need is a way of referring to the boost directory without changing the default settings in Visual Studio. "Why don't you change it at the project level?" I hear you cry. The solution has over 200 projects, which would require a lot of changes. I just wondered if there was another way? Cheers, Rich

    Read the article

  • Conditionally execute a task after building a solution with MSBuild + TFS

    - by SoMoS
    Hello, I'm using MSBuild with TFS, and I have to build 4 solutions. When the compilation is done, I should launch up to 4 different Exec tasks, depending on whether each compilation was successful or not. I know how to do that with MSBuild alone, using targets with conditions on the $(BuildBreak) variable, because I can do build solution - check result - exec task - build... but I don't know how to do that with the TFS extensions. Any help will be much appreciated. Thanks, mates.

    Read the article
