Search Results

Search found 59071 results on 2363 pages for 'build system'.


  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to Windows Azure "Web Roles", I wanted an easier way to deploy new versions to the Azure staging environment, as well as a reliable process for rolling a deployment back to a known-good source control commit. By configuring our JetBrains TeamCity CI server to use the Windows Azure PowerShell cmdlets to create automated deployments, I'll show you how to take control of your Azure publish process.

    Step 0: Configuring your Azure Project in Visual Studio

    Before we start automating the deployment, we should make sure manual deployments from Visual Studio are working properly. Detailed information on setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or with some quick Googling, but the basics are as follows:

    1. Install the prerequisite Windows Azure SDK.
    2. Create an Azure project by right-clicking on your web project and choosing "Add Windows Azure Cloud Service Project" (or by manually adding that project type).
    3. Configure your Role and Service Configuration/Definition as desired.
    4. Right-click on your Azure project, choose "Publish," create a publish profile, and push to your web role.

    You don't actually have to do step 4 and create a publish profile, but it's a good exercise to make sure everything is working properly. Once your Windows Azure project is set up correctly, we are ready to move on to understanding the Azure publish process.

    Understanding the Azure Publish Process

    The actual Windows Azure project is fairly simple at its core: it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files:

    - ServiceConfiguration.Cloud.cscfg: the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString, and other Setting information.
    - [ProjectName].Azure.cspkg: the package file that contains the guts of your deployment, including all deployable files.

    When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/. If you want to build your Azure project from the command line, it's as simple as calling MSBuild on the "Publish" target:

        msbuild.exe /target:Publish

    Windows Azure PowerShell Cmdlets

    The last piece of the puzzle that makes CI automation possible is the Windows Azure PowerShell cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx). These cmdlets are what will let us create deployments without Visual Studio or other user intervention.

    Preparing TeamCity for Azure Deployments

    Now we are ready to get our TeamCity server set up so it can build and deploy Windows Azure projects, which, as we now know, requires the Azure SDK and the Windows Azure PowerShell cmdlets. Installing the Azure SDK is easy enough: just go to https://www.windowsazure.com/en-us/develop/net/ and click "Install". Once the SDK is installed, I recommend running a test build to make sure your project is building correctly. You'll want to set up your build step using MSBuild with the "Publish" target against your solution file. Assuming the build was successful, you will now have the two *.cspkg and *.cscfg files within your build directory.
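    For reference, the MSBuild invocation behind such a build step might look like the following sketch; the solution name and configuration property here are illustrative placeholders, not values from the original post:

        msbuild.exe YourSolution.sln /target:Publish /p:Configuration=Release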
    If the build was red (failed), take a look at the build logs and keep an eye out for "unsupported project type" or other build errors, which will need to be addressed before the CI deployment can be completed. With a successful build we are now ready to install and configure the Windows Azure PowerShell cmdlets:

    1. Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the cmdlets and configure PowerShell.
    2. After installing the cmdlets, get your Azure subscription info using the Get-AzurePublishSettingsFile command.
    3. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script.

    Scripting the CI Deploy Process

    Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity "PowerShell" build runner. Before we look at any code, here's a breakdown of our deployment algorithm:

    1. Set up your variables, including the location of the *.cspkg and *.cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/).
    2. Import the Windows Azure PowerShell cmdlets.
    3. Import and set your Azure subscription information (this is basically your authentication/authorization step, so protect your settings file).
    4. Look for a current deployment; if you find one, upgrade it, else create a new deployment.

    Pretty simple and straightforward. Now let's look at the code (also available as a gist here: https://gist.github.com/3694398):

        $subscription = "[Your Subscription Name]"
        $service = "[Your Azure Service Name]"
        $slot = "staging" #staging or production
        $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg"
        $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg"
        $timeStampFormat = "g"
        $deploymentLabel = "ContinuousDeploy to $service v%build.number%"

        Write-Output "Running Azure Imports"
        Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
        Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings"
        Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription

        function Publish() {
            $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue

            if ($a[0] -ne $null) {
                Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment."
            }

            if ($deployment.Name -ne $null) {
                # Update the deployment in place (usually faster, cheaper, and won't destroy the VIP)
                Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $service. Upgrading deployment."
                UpgradeDeployment
            } else {
                CreateNewDeployment
            }
        }

        function CreateNewDeployment() {
            write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"

            $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        function UpgradeDeployment() {
            write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"

            # perform the in-place upgrade
            $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        Write-Output "Create Azure Deployment"
        Publish

    Creating the TeamCity Build Step

    The only thing left is to create a second build step, after your MSBuild "Publish" step, with the build runner type "PowerShell". Set your script option to "Source Code," set the script execution mode to "Put script into PowerShell stdin with '-Command' arguments," and then copy/paste in the above script (replacing the placeholder sections with your values).

    Wrap Up

    After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell cmdlets, we have a fully deployable build configuration in TeamCity. You can configure this step to run whenever you'd like using build triggers; for example, you could even deploy whenever a new commit comes in on the master branch and passes all required tests. In the script I've hardcoded that every deployment goes to the staging environment on Azure, but you could deploy straight to production if you want to, or even set up a deployment configuration variable and set it as desired.

    One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run". This brings up a dialog where you can select the last change to use for your deployment. Since Azure Web Role deployments don't have any rollback functionality, this is a critical feature.

    Enjoy!

    Read the article

  • System.OutOfMemoryException on file download

    - by frosty
    I have an .ashx handler with the following code. The idea is to hide the path of the file and prompt a download:

        context.Response.Clear();
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
        context.Response.AddHeader("Content-Length", file.Length.ToString());
        context.Response.ContentType = "application/octet-stream";
        context.Response.WriteFile(file.FullName);

    This works fine for some files; however, on others I get: Exception of type 'System.OutOfMemoryException' was thrown.
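    A common cause of this is that WriteFile, with response buffering on, pulls the entire file into memory before sending it. One sketch of an alternative (not from the original question) is to disable buffering and stream the file in chunks; assuming file is a FileInfo as in the snippet above:

        // a sketch: stream the file in small chunks instead of buffering it all
        context.Response.Clear();
        context.Response.BufferOutput = false;
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
        context.Response.AddHeader("Content-Length", file.Length.ToString());
        context.Response.ContentType = "application/octet-stream";

        using (var stream = file.OpenRead())
        {
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0 &&
                   context.Response.IsClientConnected)
            {
                context.Response.OutputStream.Write(buffer, 0, read);
            }
        }

    Alternatively, context.Response.TransmitFile(file.FullName) hands the file to IIS without loading it into application memory.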

    Read the article

  • Why would an autoconf/automake project link against the installed library instead of the local development library?

    - by Beau Simensen
    I'm creating a library libgdata that has some tests and non-installed programs. I am running into the problem that, once I've installed the library, the programs seem to link against the installed version and no longer against the local version in ../src/libgdata.la. What could cause this? Am I doing something horribly wrong? Here is what my test/Makefile.am looks like:

        INCLUDES = -I$(top_srcdir)/src/ -I$(top_srcdir)/test/

        # libapiutil contains all of our dependencies!
        AM_CXXFLAGS = $(APIUTIL_CFLAGS)
        AM_LDFLAGS = $(APIUTIL_LIBS)

        LDADD = $(top_builddir)/src/libgdata.la

        noinst_PROGRAMS = gdatacalendar gdatayoutube
        gdatacalendar_SOURCES = gdatacalendar.cc
        gdatayoutube_SOURCES = gdatayoutube.cc

        TESTS = check_bare
        check_PROGRAMS = $(TESTS)
        check_bare_SOURCES = check_bare.cc

    (libapiutil is another library that has some helper stuff for dealing with libcurl and libxml++.)

    So, for instance, if I run the tests without having installed anything, everything works fine. I can make changes locally and they are picked up by these programs right away. If I install the package, these programs will compile (it seems like it does actually look locally for the headers), but once I run a program it complains about missing symbols. As far as I can tell, it is linking against the newly built library (../src/libgdata.la) based on the make output, so I'm not sure why this would be happening. If I remove the installed files, the local changes to src/* are picked up just fine. I've included the make output for gdatacalendar below.

        g++ -DHAVE_CONFIG_H -I. -I.. -I../src/ -I../test/ -I/home/altern8/workspaces/4355/dev-install/include -I/usr/include/libxml++-2.6 -I/usr/lib/libxml++-2.6/include -I/usr/include/libxml2 -I/usr/include/glibmm-2.4 -I/usr/lib/glibmm-2.4/include -I/usr/include/sigc++-2.0 -I/usr/lib/sigc++-2.0/include -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -g -O2 -MT gdatacalendar.o -MD -MP -MF .deps/gdatacalendar.Tpo -c -o gdatacalendar.o gdatacalendar.cc
        mv -f .deps/gdatacalendar.Tpo .deps/gdatacalendar.Po
        /bin/bash ../libtool --tag=CXX --mode=link g++ -I/home/altern8/workspaces/4355/dev-install/include -I/usr/include/libxml++-2.6 -I/usr/lib/libxml++-2.6/include -I/usr/include/libxml2 -I/usr/include/glibmm-2.4 -I/usr/lib/glibmm-2.4/include -I/usr/include/sigc++-2.0 -I/usr/lib/sigc++-2.0/include -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -g -O2 -L/home/altern8/workspaces/4355/dev-install/lib -lapiutil -lcurl -lgssapi_krb5 -lxml++-2.6 -lxml2 -lglibmm-2.4 -lgobject-2.0 -lsigc-2.0 -lglib-2.0 -o gdatacalendar gdatacalendar.o ../src/libgdata.la
        mkdir .libs
        g++ -I/home/altern8/workspaces/4355/dev-install/include -I/usr/include/libxml++-2.6 -I/usr/lib/libxml++-2.6/include -I/usr/include/libxml2 -I/usr/include/glibmm-2.4 -I/usr/lib/glibmm-2.4/include -I/usr/include/sigc++-2.0 -I/usr/lib/sigc++-2.0/include -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -g -O2 -o .libs/gdatacalendar gdatacalendar.o -L/home/altern8/workspaces/4355/dev-install/lib /home/altern8/workspaces/4355/dev-install/lib/libapiutil.so /usr/lib/libcurl.so -lgssapi_krb5 /usr/lib/libxml++-2.6.so /usr/lib/libxml2.so /usr/lib/libglibmm-2.4.so /usr/lib/libgobject-2.0.so /usr/lib/libsigc-2.0.so /usr/lib/libglib-2.0.so ../src/.libs/libgdata.so -Wl,--rpath -Wl,/home/altern8/workspaces/4355/dev-install/lib
        creating gdatacalendar

    Help. :)

    UPDATE

    I get the following messages when I try to run the calendar program after adding the addCommonRequestHeader() method to the Service class, having previously installed the library without the addCommonRequestHeader() method:

        /home/altern8/workspaces/4355/libgdata/test/.libs/lt-gdatacalendar: symbol lookup error: /home/altern8/workspaces/4355/libgdata/test/.libs/lt-gdatacalendar: undefined symbol: _ZN55gdata7service7Service22addCommonRequestHeaderERKSsS4_

    Eugene's suggestion to try setting the $LD_LIBRARY_PATH variable did not help.

    UPDATE 2

    I did two tests. First, I did this after blowing away my dev-install directory (--prefix), and in that case it creates test/.libs/lt-gdatacalendar. Once I have installed the library, though, it creates test/.libs/gdatacalendar instead. The output of ldd is the same for both with one exception:

        # before install
        # ldd test/.libs/lt-gdatacalendar
        libgdata.so.0 => /home/altern8/workspaces/4355/libgdata/src/.libs/libgdata.so.0 (0xb7c32000)

        # after install
        # ldd test/.libs/gdatacalendar
        libgdata.so.0 => /home/altern8/workspaces/4355/dev-install/lib/libgdata.so.0 (0xb7c87000)

    What would cause this to create lt-gdatacalendar in one case but gdatacalendar in another? The output of ldd on libgdata is:

        altern8@goldfrapp:~/workspaces/4355/libgdata$ ldd /home/altern8/workspaces/4355/libgdata/src/.libs/libgdata.so.0
        linux-gate.so.1 => (0xb7f7c000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7f3b000)
        libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7dec000)
        /lib/ld-linux.so.2 (0xb7f7d000)
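    A note on one possible cause, offered as an assumption rather than something confirmed in the thread: on the relink line above, -L/home/.../dev-install/lib (which comes from AM_LDFLAGS) appears before ../src/libgdata.la, and an rpath pointing at the install prefix is also picked up via libapiutil, so once a copy of libgdata exists under dev-install, the linker and runtime loader can prefer it. A sketch of a test/Makefile.am arrangement that keeps the in-tree library ahead of the installed search path:

        # a sketch: list the in-tree .la before the flags that drag in the
        # installed -L path, so the local build wins at link time
        INCLUDES = -I$(top_srcdir)/src/ -I$(top_srcdir)/test/
        AM_CXXFLAGS = $(APIUTIL_CFLAGS)
        LDADD = $(top_builddir)/src/libgdata.la $(APIUTIL_LIBS)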

    Read the article

  • noClassDefFoundError using Scala Plugin for Eclipse

    - by Jacob Lyles
    I successfully implemented and ran several Scala tutorials in Eclipse using the Scala plugin. Then suddenly I tried to compile and run an example, and this error came up:

        Exception in thread "main" java.lang.NoClassDefFoundError: hello/HelloWorld
        Caused by: java.lang.ClassNotFoundException: hello.HelloWorld
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:315)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:330)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:250)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:398)

    After this point I could no longer run any Scala programs in Eclipse. I tried cleaning and rebuilding my project, closing and reopening my project, and closing and reopening Eclipse. The Eclipse version is 3.5.2 and the Scala plugin is 2.8.0. Here is the original code:

        package hello

        object HelloWorld {
          def main(args: Array[String]) {
            println("hello world")
          }
        }

    Read the article

  • How do I get rid of "API restriction UnitTestFramework.dll already loaded" error?

    - by Kevin Driedger
    The following error pops up every now and then:

        C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\TeamTest\Microsoft.TeamTest.targets(14,5):
        error : API restriction: The assembly 'file:///C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll' has already loaded from a different location. It cannot be loaded from a new location within the same appdomain.

    How do I get rid of it?

    Read the article

  • Core Data iPad/iPhone BLOBS vs File system for 20k PDFs

    - by jamone
    I'm designing an iPad/iPhone app using Core Data. The main focus of the app is sorting and viewing up to 20,000 PDFs, each around 200 KB. Typically it's best not to store BLOBs in a DB, but for desktop systems I've often seen it said that if the blobs are < 1 MB, using the DB is fine. Are there any considerations I should take into account? If I store them in the file system, can I store them all in one directory without performance issues (I won't ever need to list the directory, since I'd store each file's path in the DB)? Should I divide them among a handful of directories? If so, is there a good rule on the number of files per directory?
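    One common arrangement, offered here as a sketch under assumptions rather than an answer from the thread, is to keep only a relative path in the database and bucket the files into subdirectories derived from the filename, so no single directory grows unbounded. In modern Swift for illustration (the original question predates Swift):

        import Foundation

        // a sketch: bucket PDFs into subdirectories by a filename prefix and
        // store only the relative path (e.g. "ab/abc123.pdf") in the database
        func storagePath(forPDFNamed name: String, under root: URL) throws -> URL {
            let bucket = String(name.prefix(2))   // e.g. "ab" for "abc123.pdf"
            let dir = root.appendingPathComponent(bucket, isDirectory: true)
            try FileManager.default.createDirectory(at: dir, withIntermediateDirectories: true)
            return dir.appendingPathComponent(name)
        }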

    Read the article

  • MySqlDateTime to System.DateTime conversion

    - by bowsa
    OK, I'm very new to databases and C# in general, but I'm using a piece of code that exports DataSet data to an Excel file, and it's taking issue with the date/time format. I'm using the MySQL connector, so the row type is MySql.Data.Types.MySqlDateTime. Is there a quick way to convert it into System.DateTime so I can slot it straight into the case statement? Here's a link to the code I used; I copied it verbatim, so I've not copied and pasted it here. It throws a "MySql.Data.Types.MySqlDateTime not handled" exception: Code Project. Thanks in advance for any help.
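    For what it's worth, MySqlDateTime carries its own conversion method; a minimal sketch, assuming the cell really is a MySqlDateTime (row and columnIndex are placeholders, not names from the question):

        // a sketch: convert MySql.Data.Types.MySqlDateTime to System.DateTime
        var raw = (MySql.Data.Types.MySqlDateTime)row[columnIndex];
        DateTime value = raw.IsValidDateTime ? raw.GetDateTime() : DateTime.MinValue;

    Another commonly mentioned route is the "Allow Zero Datetime" / "Convert Zero Datetime" connection-string settings, which control whether the connector hands back MySqlDateTime or System.DateTime in the first place.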

    Read the article

  • Store value of os.system or os.popen

    - by chrissygormley
    Hello, I want to grep the errors out of a log file and save the count to a variable. When I use:

        errors = os.system("cat log.txt | grep 'ERROR' | wc -l")

    I get the command's return code, not its output. When I use:

        errors = os.popen("cat log.txt | grep 'ERROR' | wc -l")

    I get an object representing the command, not the count itself. When I run this on the command line I get 3, as that's how many errors there are. Can anyone suggest another way? Thanks
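    os.popen returns a file-like object, so the missing step is reading it. A short sketch; note grep -c replaces the cat | grep | wc -l pipeline, which is a substitution of mine, not from the question:

        import os

        # read the pipe's output and convert it to an int
        errors = int(os.popen("grep -c 'ERROR' log.txt").read().strip())

        # the same thing with subprocess, the usual recommendation
        import subprocess
        out = subprocess.Popen(["grep", "-c", "ERROR", "log.txt"],
                               stdout=subprocess.PIPE).communicate()[0]
        errors = int(out.strip())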

    Read the article

  • SQL Server 2008 Express doesn't see my Visual Studio 2008 Team System's SP1

    - by Kamilos
    Hi, I want to install SQL Server 2008 Express. I already have Visual Studio 2008 Team System with SP1. The VS Help/About dialog shows me:

        MVS version: 9.0.30729.1 SP
        .NET Framework version: 3.5 SP1

    but the SQL Server installer claims that Visual Studio doesn't have SP1. I tricked it by changing the Windows registry value HKLM\Software\Microsoft\DevDiv\VS\Servicing\9.0\IDE\1033 from RTM to SP1, and the installation ran. But during installation the SP1 error occurred again, and SQL Server was installed without SQL Management Studio. When I try to install that, I always get the same error about SP1. I have installed VS SP1 a couple of times successfully, but it changes nothing. I installed SQL Server SP1 as well, to no effect. Reinstalling VS 2008 and SP1 does nothing either. What can I do? Thanks for any help, Kamilos

    Read the article

  • How to update maven repository in eclipse?

    - by Stephane Grenier
    Assuming you're already using the m2eclipse plugin, what can you do if it doesn't update the dependencies to the latest in your repo? For example, on the command line you can just add the -U flag, as in:

        mvn clean install -U

    to force the dependencies to be updated. Is there something like this within Eclipse, since it doesn't always seem to pick up the latest updates?

    Read the article

  • Makefiles - Compile all .cpp files in src/ to .o's in obj/, then link to binary in /

    - by Austin Hyde
    So, my project directory looks like this:

        /project
            Makefile
            main
            /src
                main.cpp
                foo.cpp
                foo.h
                bar.cpp
                bar.h
            /obj
                main.o
                foo.o
                bar.o

    What I would like my makefile to do would be to compile all .cpp files in the /src folder to .o files in the /obj folder, then link all the .o files in /obj into the output binary in the root folder /project. The problem is, I have next to no experience with Makefiles, and am not really sure what to search for to accomplish this. Also, is this a "good" way to do this, or is there a more standard approach to what I'm trying to do?
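    For reference, a minimal GNU make sketch that matches this layout; the compiler and flags are illustrative assumptions, and recipe lines must be indented with tabs:

        # compile every src/*.cpp to obj/*.o, then link them into ./main
        CXX      := g++
        CXXFLAGS := -Wall -g
        SRCS     := $(wildcard src/*.cpp)
        OBJS     := $(patsubst src/%.cpp,obj/%.o,$(SRCS))

        main: $(OBJS)
        	$(CXX) -o $@ $^

        obj/%.o: src/%.cpp | obj
        	$(CXX) $(CXXFLAGS) -c -o $@ $<

        obj:
        	mkdir -p obj

        clean:
        	rm -f main obj/*.o

    The wildcard/patsubst pair is the standard idiom for "every .cpp in src/ becomes an .o in obj/", so new source files are picked up without editing the Makefile.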

    Read the article

  • What is the "UseRANU" parameter in Visual Studio?

    - by sudarsanyes
    I have created a package in VS2010 RC using the MPF (Managed Package Framework), and I get the following error. Can somebody help me out with this?

        The "UseRANU" parameter is not supported by the "VsTemplatePaths" task. Verify the parameter exists on the task, and it is a settable public instance property.
        The "VsTemplatePaths" task could not be initialized with its input parameters.

    Read the article

  • RSS parsing lastBuildDate. Fastest way to do so, please.

    - by Paul
        Dim myRequest As System.Net.WebRequest = System.Net.WebRequest.Create(url)
        Dim myResponse As System.Net.WebResponse = myRequest.GetResponse()
        Dim rssStream As System.IO.Stream = myResponse.GetResponseStream()

        Dim rssDoc As New System.Xml.XmlDocument()
        Try
            rssDoc.Load(rssStream)
        Catch nosupport As NotSupportedException
            Throw nosupport
        End Try

        Dim rssItems As System.Xml.XmlNodeList = rssDoc.SelectNodes("rss/channel")
        'For i As Integer = 0 To rssItems.Count - 1
        Dim rssDetail As System.Xml.XmlNode
        rssDetail = rssItems.Item(0).SelectSingleNode("lastBuildDate")

    Folks, this is what I'm using to parse an RSS feed for the last updated time. Is there a quicker way? Speed seems to be a bit slow, as it pulls down the entire feed before parsing.
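    One sketch of a lighter-weight approach, offered as an illustration rather than the question's own code: stream the XML with XmlReader and stop as soon as lastBuildDate appears, which is normally near the top of the channel, instead of loading the whole document into an XmlDocument:

        ' a sketch: stop reading as soon as lastBuildDate is found
        Using reader As System.Xml.XmlReader = System.Xml.XmlReader.Create(url)
            If reader.ReadToFollowing("lastBuildDate") Then
                Dim lastBuild As String = reader.ReadElementContentAsString()
            End If
        End Using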

    Read the article

  • ASP.NET: dealing with issues in a production system

    - by nettguy
    For the first time I am going to work on a (maintenance project) application that is already in production. The application was developed in ASP.NET 3.5/C# 3.0 (Web Forms) with jQuery, Ajax, SQL Server 2005, Microsoft Enterprise Library 4.0, and WCF services.

    Questions (bear with me if my questions are wrong):

    1) Is it possible to use Visual Studio (2008 with SP1) to debug the remote application (i.e. already in production)? What tools do I need to use in order to keep track of things in case something goes wrong?
    2) Will simply looking into the log file solve the issues?
    3) After I'm done with enhancements, is it possible to deploy the DLLs directly to the production server? Won't it affect the running application?

    Please guide me on the procedures I need to follow. The client is ready to provide any tools for my support. (What areas do I need to be aware of to handle a production system?) Thanks in advance.

    Read the article

  • LibPNG + Boost::GIL: png_infopp_NULL not found

    - by Viet
    Hi, I always get this error when trying to compile my file with Boost::GIL PNG IO support. (I'm running Mac OS X Leopard with Boost 1.42 and libpng 1.4.)

        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_reader::init()':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:155: error: 'png_infopp_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:160: error: 'png_infopp_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In destructor 'boost::gil::detail::png_reader::~png_reader()':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:174: error: 'png_infopp_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_reader::apply(const View&)':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:186: error: 'int_p_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_reader_color_convert<CC>::apply(const View&)':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:228: error: 'int_p_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_writer::init()':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:317: error: 'png_infopp_NULL' was not declared in this scope
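    For context: libpng 1.4 removed the png_infopp_NULL and int_p_NULL convenience macros that older Boost.GIL headers still reference. A commonly used workaround is to define them yourself before including the GIL PNG header:

        // workaround: libpng >= 1.4 no longer defines these macros
        #define png_infopp_NULL (png_infopp)NULL
        #define int_p_NULL (int*)NULL
        #include <boost/gil/extension/io/png_io.hpp>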

    Read the article

  • Semantic Grid System, Media Query issue

    - by Andy
    I'm using the Semantic Grid System to build a responsive site. However, something isn't quite right with the media queries that should kick in once the viewport hits a particular size. I'll reference their example on the website: if I view it on my iPhone, given that it is supposed to adjust to a single-column structure on a mobile device, it still renders the desktop version of the page. That is true for both Safari and Chrome on my iPhone. However, if I use the RWD bookmarklet to check its appearance at different resolutions, it appears as expected at the mobile resolution. Also, ironically, if I resize the page in Safari on my desktop, it adjusts accordingly once I get down to the appropriate size, but not in Firefox. The media query that it uses once it hits 720px is:

        @media screen and (max-width: 720px) {
            #maincolumn, #sidebar {
                .column(12);
                margin-bottom: 1em;
            }
        }

    and I might be wide of the mark here, but I think that must be the issue. Given that this is directly from the semantic.gs website, though, I'm not inclined to question their own code. Any idea what the problem might be?
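    The symptoms described (desktop rendering on the phone, but correct behavior when resizing a desktop browser) are the classic signature of a missing viewport meta tag rather than a broken media query: without it, Mobile Safari lays the page out at a roughly 980px virtual width, so max-width: 720px never matches. A sketch of the usual fix, placed in the page head:

        <meta name="viewport" content="width=device-width, initial-scale=1">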

    Read the article

  • How to call SelectMany dynamically, in the style of System.Linq.Dynamic

    - by user341127
    In System.Linq.Dynamic, there are methods to form Select, Where, and other LINQ statements dynamically, but there is none for SelectMany. The method for Select is as follows:

        public static IQueryable Select(this IQueryable source, string selector, params object[] values)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (selector == null) throw new ArgumentNullException("selector");
            LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, null, selector, values);
            IQueryable result = source.Provider.CreateQuery(
                Expression.Call(
                    typeof(Queryable), "Select",
                    new Type[] { source.ElementType, lambda.Body.Type },
                    source.Expression, Expression.Quote(lambda)));
            return result;
        }

    I tried to modify the above code, but after hours of work I couldn't find a way out. Any suggestions are welcome. Ying
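    A sketch of one possible adaptation, untested here and assuming the dynamic selector evaluates to an IEnumerable<T>: the extra work compared to Select is digging the element type out of the lambda's return type and retyping the lambda to the Expression<Func<TSource, IEnumerable<TResult>>> shape that Queryable.SelectMany expects:

        public static IQueryable SelectMany(this IQueryable source, string selector, params object[] values)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (selector == null) throw new ArgumentNullException("selector");
            LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, null, selector, values);

            // Dig the element type out of the lambda's IEnumerable<T> return type.
            Type bodyType = lambda.Body.Type;
            Type enumerable = (bodyType.IsGenericType && bodyType.GetGenericTypeDefinition() == typeof(IEnumerable<>))
                ? bodyType
                : bodyType.GetInterface("IEnumerable`1");
            Type resultType = enumerable.GetGenericArguments()[0];

            // Retype the lambda as Func<TSource, IEnumerable<TResult>>, the shape SelectMany expects.
            Type delegateType = typeof(Func<,>).MakeGenericType(
                source.ElementType, typeof(IEnumerable<>).MakeGenericType(resultType));
            lambda = Expression.Lambda(delegateType, lambda.Body, lambda.Parameters);

            return source.Provider.CreateQuery(
                Expression.Call(
                    typeof(Queryable), "SelectMany",
                    new Type[] { source.ElementType, resultType },
                    source.Expression, Expression.Quote(lambda)));
        }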

    Read the article

  • Reporting System architecture for better performance

    - by pauloya
    Hi, we have a product that runs SQL Server Express 2005 and uses mainly ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow by 300 to 5,000 rows per day and keep a history of 5 years, so they can grow to 10 million rows. We have built a reporting platform that allows customers to build reports based on templates, fields, and filters. We have faced performance problems almost since the beginning. We try to keep report display times under 10 seconds, but some go up to 25 seconds (especially for customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling that there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with optimization. We also added a few tables that keep redundant data, but then we have the added problem of keeping that data up to date, and SQL Express has a limit on database size. We are now at a point where we have to decide whether to give up real-time reports, or cut the history, to get better performance. I would like to ask what the recommended approach is for this kind of system. Also, should we start looking at third-party tools/platforms? I know OLAP can be an option, but can we make it work on SQL Server Express, or at least with a license cheap enough to distribute to thousands of deployments? Thanks

    Read the article

  • building SQL Query From another Query in php

    - by Nina
    Hello. When I try to build a query from another query in PHP code, I run into a problem; can you tell me why? :( Code:

        $First = "SELECT ro.RoomID, ro.RoomName, ro.RoomLogo, jr.RoomID, jr.MemberID, ro.RoomDescription
                  FROM joinroom jr, rooms ro
                  WHERE ro.RoomID = jr.RoomID AND jr.MemberID = '1'";
        $sql1 = mysql_query($First);

        $constract .= "ro.RoomName LIKE '%$search_each%'";
        $constract = "SELECT * FROM $sql1 WHERE $constract"; // This statement makes an error
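    The error comes from interpolating $sql1, which is a MySQL result resource, into the SQL string where a table name is expected. One sketch of a fix (column and table names are taken from the question; the derived-table alias t is mine) is to nest the first query as a derived table instead:

        // build one statement, with the first query as a derived table
        $constract = "t.RoomName LIKE '%$search_each%'";
        $sql = "SELECT * FROM (
                    SELECT ro.RoomID, ro.RoomName, ro.RoomLogo, ro.RoomDescription
                    FROM joinroom jr, rooms ro
                    WHERE ro.RoomID = jr.RoomID AND jr.MemberID = '1'
                ) AS t
                WHERE $constract";   // jr.RoomID dropped: duplicate column names
                                     // are not allowed in a derived table
        $result = mysql_query($sql);

    Note that $search_each should also be escaped (e.g. with mysql_real_escape_string) before being interpolated, or the query is open to SQL injection.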

    Read the article

  • Creating DescriptionAttribute on Enumeration Field using System.Reflection.Emit

    - by Manish Sinha
    I have a list of strings which are candidates for enumeration values:

        Don't send diffs
        500 lines
        1000 lines
        5000 lines
        Send entire diff

    The problem is that spaces and special characters are not allowed in identifiers, which cannot start with a number either, so I will be sanitizing these values down to chars, numbers, and _. To keep the original values, I thought of putting these strings in a DescriptionAttribute, such that the final enum would look like:

        public enum DiffBehvaiour
        {
            [Description("Don't send diffs")]
            Dont_send_diffs,
            [Description("500 lines")]
            Diff_500_lines,
            [Description("1000 lines")]
            Diff_1000_lines,
            [Description("5000 lines")]
            Diff_5000_lines,
            [Description("Send entire diff")]
            Send_entire_diff
        }

    Later, using code, I will retrieve the real string associated with the enumeration value, so that the correct string can be sent back to the web service to get the correct resource. I want to know how to create the DescriptionAttribute using System.Reflection.Emit. Basically the question is: where and how do I store the original string, so that when the enumeration value is chosen, the corresponding value can be retrieved? I am also interested in knowing how to access the DescriptionAttribute when needed.
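    A sketch of the Reflection.Emit side, with illustrative assembly/module names: attributes are attached at emit time through CustomAttributeBuilder, and read back later with Attribute.GetCustomAttribute:

        using System;
        using System.ComponentModel;
        using System.Reflection;
        using System.Reflection.Emit;

        // a sketch: emit an enum whose fields carry [Description("...")]
        var asmName = new AssemblyName("DynamicEnums");
        AssemblyBuilder asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
            asmName, AssemblyBuilderAccess.Run);
        ModuleBuilder module = asm.DefineDynamicModule("Main");
        EnumBuilder enumBuilder = module.DefineEnum("DiffBehvaiour", TypeAttributes.Public, typeof(int));

        ConstructorInfo ctor = typeof(DescriptionAttribute).GetConstructor(new[] { typeof(string) });
        FieldBuilder field = enumBuilder.DefineLiteral("Dont_send_diffs", 0);
        field.SetCustomAttribute(new CustomAttributeBuilder(ctor, new object[] { "Don't send diffs" }));

        Type built = enumBuilder.CreateType();

        // reading the description back for a chosen value
        FieldInfo fi = built.GetField("Dont_send_diffs");
        var desc = (DescriptionAttribute)Attribute.GetCustomAttribute(fi, typeof(DescriptionAttribute));
        Console.WriteLine(desc.Description);   // "Don't send diffs"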

    Read the article

  • Shark was unable to find symbol information for this address range - iPhone

    - by Elliot
    I'm trying to use Shark to determine which method(s) are taking the most time in my iPhone app. After sampling, clicking the "!" button yields:

        Shark was unable to find symbol information for this address range. Typically this happens because the application was compiled without symbols or they have been subsequently stripped away. In Xcode, make sure the "Generate Debug Symbols" checkbox is selected (passes the -g flag to the compiler). Note that this does not affect code optimization, and does not typically alter performance significantly. However, the extra symbol information does consume significantly more space and may bloat the size of the executable.

    But I AM using the Debug configuration, I am running on my device, and "Generate Debug Symbols" IS checked. So what's wrong?

    Read the article

  • System.Net.WebProxy in .NET Client

    - by NealWalters
    I don't yet have a good understanding of how the network is set up at my current client, but here is my issue. The vendor has a normal URL, but it might be a leased line. I have a .NET program that calls an external .asmx service. If I go to IE, Connections, LAN Settings, and uncheck "Automatically detect settings" and "Use automatic configuration script", then the .NET program runs and accesses the web service fine. But then, to access the internet again from the browser, I have to re-check the two boxes, and so on throughout the day while we are testing and browsing. I'm hoping to put some code like this in the .NET program so we can avoid constantly changing the settings:

        if (chkUseProxy.Checked)
        {
            myclient.Proxy = new System.Net.WebProxy("");
        }

    I tried null and the empty string, and so far I'm not able to connect without errors. Is this possible, and if so, what might be the correct parameters for the constructor? Or are there other objects or properties that would need to be set? Thanks, Neal Walters
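    A sketch of two commonly cited ways to make a web-service client bypass the proxy entirely; myclient stands in for the generated .asmx proxy class, as in the question:

        // in code: give the client an empty proxy so nothing is routed through one
        myclient.Proxy = System.Net.GlobalProxySelection.GetEmptyWebProxy();

    Or in app.config, which disables the default proxy for the whole process:

        <configuration>
          <system.net>
            <defaultProxy enabled="false" />
          </system.net>
        </configuration>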

    Read the article

  • Continuous integration with .NET and SVN

    - by stiank81
    We're currently not applying the automated building and testing of continuous integration in our project. We haven't bothered so far, as we're only 2 developers working on it, but even with a team of 2 I still think it would be valuable to use continuous integration and get confirmation that our builds don't break or that tests haven't started failing. We're using .NET with C# and WPF. We have created Python scripts for building the application (using MSBuild) and for running all tests. Our source is in SVN. What would be the best approach to applying continuous integration with this setup? What tool should we get? It should be one which doesn't require a lot of setup: simple procedures to get started and little maintenance are a must.

    Read the article
