Search Results

Search found 35692 results on 1428 pages for 'build process'.


  • Building/Testing a Universal iPhone/iPad application

    - by psychotik
    I have a project configured (I think) to produce Universal binaries. The base SDK is set to 3.2 and the Deployment Target is set to 3.1. The Target Device Family is iPhone/iPad and the architectures are armv6 and armv7. I have a few questions about how this Universal binary thing really works:

    1) When I want to submit an app binary for review, what configuration should I set as the build target? If I set it to "Device - 3.1" I get the warning "building with Targeted Device Family that includes iPad ('1,2') requires building with the 3.2 or later SDK". However, if I build with SDK 3.2, will it still run on iPhones with OS 3.1? What's the right configuration for device and architecture (armv6/armv7)?

    2) How do I test the scenario above (built with SDK 3.2, but installed on a device running OS 3.1)? When I build with SDK 3.2 and try to install it on a phone with OS 3.1, I get an error saying that the phone's OS isn't updated.

    Thanks!

    Read the article

  • How to macro-ify ant targets?

    - by Jonas Byström
    I want to be able to have different targets doing nearly the same thing, like so:

        ant build    <- this would be a normal (default) build
        ant safari   <- building the safari target

    The targets look like this:

        <target name="build" depends="javac" description="GWT compile to JavaScript">
          <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler">
            <classpath>
              <pathelement location="src"/>
              <path refid="project.class.path"/>
            </classpath>
            <jvmarg value="-Xmx256M"/>
            <arg value="${lhs.target}"/>
          </java>
        </target>

        <target name="safari" depends="javac" description="GWT compile to Safari/JavaScript">
          <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler">
            <classpath>
              <pathelement location="src"/>
              <path refid="project.class.path"/>
            </classpath>
            <jvmarg value="-Xmx256M"/>
            <arg value="${lhs.safari.target}"/>
          </java>
        </target>

    (Never mind the first thought that strikes: throw out ant! That's not an option just yet.) I tried using macrodef, but got a strange error message (even though the message didn't imply it, I think it had to do with putting a target in sequential). I don't want to do ant -Dwhatever=nevermind. Any ideas?
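
    A macrodef can't contain <target> elements, which matches the error hinted at above; instead, the targets call the macro. A minimal sketch of factoring out the shared <java> call (the macro and attribute names here are made up):

        <macrodef name="gwt-compile">
          <attribute name="module"/>
          <sequential>
            <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler">
              <classpath>
                <pathelement location="src"/>
                <path refid="project.class.path"/>
              </classpath>
              <jvmarg value="-Xmx256M"/>
              <!-- macro attributes are expanded with @{...}, not ${...} -->
              <arg value="@{module}"/>
            </java>
          </sequential>
        </macrodef>

        <target name="build" depends="javac" description="GWT compile to JavaScript">
          <gwt-compile module="${lhs.target}"/>
        </target>

        <target name="safari" depends="javac" description="GWT compile to Safari/JavaScript">
          <gwt-compile module="${lhs.safari.target}"/>
        </target>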

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks,

    We have a linux system (kubuntu 7.10) that runs a number of CORBA server processes. The server software uses glibc libraries for memory allocation. The linux PC has 4G physical memory. Swap is disabled for speed reasons.

    Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G bytes. It can be up to about 1.9G bytes. When the request has completed, the buffer is released using 'delete'.

    This works fine for several consecutive requests that allocate buffers of the same size, or if a request allocates a smaller size than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc.

    The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory that was freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system.

    Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is released to the system.

    BTW - I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request.

    Cheers,
    Brian.
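
    The mallopt tuning mentioned above would look something like the sketch below (glibc-specific; the threshold values are illustrative, not recommendations). Blocks above the mmap threshold are served by mmap() and unmapped on free(), so they go straight back to the kernel instead of lingering in the brk() heap:

        #include <malloc.h>   // mallopt - glibc specific
        #include <cstdlib>

        int main() {
            // Serve large requests via mmap() rather than the brk() heap;
            // mmap'd blocks are returned to the kernel immediately on free().
            mallopt(M_MMAP_THRESHOLD, 1024 * 1024);   // 1 MB - illustrative
            mallopt(M_MMAP_MAX, 65536);               // permit many mmap'd chunks

            void* buf = std::malloc(1200UL * 1024 * 1024);   // ~1.2G request
            // ... process the request ...
            std::free(buf);   // address range handed straight back to the kernel
            return 0;
        }

    Note that glibc already mmaps very large allocations by default, so if this makes no difference, the culprit is more likely address-space fragmentation in a 32-bit process: after the first 1.2G block plus per-request thread stacks, there may simply be no contiguous 1.9G hole left in the ~3G user address space.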

    Read the article

  • Multi-level shop, xml or sql. best practice?

    - by danrichardson
    Hello, I have a general "best practice" question regarding building a multi-level shop, which I hope doesn't get marked down/deleted, as I personally think it's quite a good "subjective" question.

    I am a developer in charge (in most part) of maintaining and evolving a cms system and associated front-end functionality. Over the past half year I have developed a multiple-level shop system, so that an infinite level of categories may exist down to product level, and it all works fine. However, over the last week or so I have questioned my own methods in front-end development and the best way to show the multi-level data structure.

    I currently use a sql server database (2000) and pull out all the shop levels, then process them into an enumerable typed list with child enumerable typed lists, so that all levels are sorted. This seems quite process-heavy in my head, but we're not talking about thousands of rows - generally only 1-500 rows maybe.

    I have been toying with the idea recently of storing the structure in an XML document (as well as the database), sending last-modified headers when serving/requesting the document, and processing it as/when necessary with an xsl(t) document - processed server side. This is quite a handy, reusable method of storing the data, but does it have more overheads given that I'm opening and closing files? The xml will also require a bit of processing to pull out blocks of xml if, for instance, I wanted to show two levels midway through the tree for a side menu.

    I use the above method for sitemap purposes, so there is already code I have built which does what I require, but I'm unsure what the best process is. Maybe a hybrid method which pulls out the data, sorts it and then makes an xml document/stream (XDocument/XmlDocument) for xsl processing is a good way? - This is the way I currently make the cms work for the shop.

    So really (and thanks for sticking with me on this), I am just wondering which methods other people use or recommend as being the best/most logical way of doing things.

    Thanks
    Dan
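
    For reference, the hybrid approach described above - sorted data serialized to an XDocument and pushed through a server-side stylesheet - could be sketched like this (the element names and stylesheet path are illustrative):

        using System.IO;
        using System.Xml.Linq;
        using System.Xml.XPath;
        using System.Xml.Xsl;

        // Categories already sorted in memory, serialized to XML in one pass.
        XDocument doc = new XDocument(
            new XElement("categories",
                new XElement("category", new XAttribute("id", 1), "Widgets")));

        // Compile once and cache; running the transform per request is cheap.
        var xslt = new XslCompiledTransform();
        xslt.Load("shop-menu.xslt");   // illustrative stylesheet path

        using (var writer = new StringWriter())
        {
            xslt.Transform(doc.CreateNavigator(), null, writer);
            string html = writer.ToString();
        }

    Keeping the document and compiled transform in memory this way avoids the file open/close overhead that the pure XML-file option raises.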

    Read the article

  • NAnt exec relative path

    - by stacker
    How can I assign to the trunk.dir property a relative path to the trunk location? This is my nant.build file:

        <?xml version="1.0" encoding="utf-8"?>
        <project name="ProjectName" default="build" xmlns="http://nant.sf.net/release/0.85/nant.xsd">

          <!-- Directories -->
          <property name="trunk.dir" value="C:\Projects\ProjectName" /><!-- I want a relative path over here! -->
          <property name="source.dir" value="${trunk.dir}src\" />

          <!-- Working Files -->
          <property name="msbuild.exe" value="C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\msbuild.exe" />
          <property name="solution.sln" value="${source.dir}ProjectName.sln" />

          <!-- Called Externally -->
          <target name="compile">
            <!-- Rebuild forces msbuild to clean and build -->
            <exec program="${msbuild.exe}" commandline="${solution.sln} /t:Rebuild /v:q" />
          </target>

        </project>
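
    If the nant.build file itself lives in the trunk folder, NAnt's built-in functions can derive the path from the build file's location instead of hard-coding it - a sketch:

        <!-- base directory = the directory containing nant.build -->
        <property name="trunk.dir" value="${project::get-base-directory()}" />
        <property name="source.dir" value="${path::combine(trunk.dir, 'src')}\" />

    project::get-base-directory() and path::combine() are standard NAnt functions; directory::get-parent-directory() can be layered on if the build file sits somewhere other than the trunk root.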

    Read the article

  • How to handle refunds or rebates via a payment processor?

    - by Tai Squared
    I need to handle online payments and am trying to choose a payment processor. One requirement is to handle refunds and rebates to the customer. These won't always be at the time of sale, and not for the entire amount of the purchase. Is this something all payment processors handle? I don't want to have to do this manually, as there may be many rebates, and they may be for relatively small amounts.

    I see PayPal has a refund API, but other parts of their site talk about sending a refund within 60 days. Is this limit also imposed by the API? Amazon FPS also has a refund API that seems a bit more flexible. The Google Checkout refund has an amount field, but it's unclear to me whether you can do a partial refund, as the description reads "The refund-order command instructs Google Checkout to refund the buyer for a particular order."

    What are some things to look out for when looking for a payment processor that can handle rebates and refunds? Is there always a time limit on issuing these refunds? Is using a merchant account better for this type of process? I was hoping to avoid that due to the increased cost and complexity, but would consider it if it meets all of my requirements.

    Update: It appears the refund process is fairly simple and handled by all processors. Is there any additional information on rebates? I would like to avoid a process of sending live checks to customers, but I will have to send rebates in some small amounts that may be a few months after the initial purchase.

    Read the article

  • C# Serialization lock out

    - by Greycrow
    When I try to serialize a class to an xml file I get the exception: "The process cannot access the file 'C:\settings.xml' because it is being used by another process."

        Settings currentSettings = new Settings();

        public void LoadSettings()
        {
            //Load Settings from XML file
            try
            {
                Stream stream = File.Open("settings.xml", FileMode.Open);
                XmlSerializer s = new XmlSerializer(typeof(Settings));
                currentSettings = (Settings)s.Deserialize(stream);
                stream.Close();
            }
            catch //Can't read XML - use default settings
            {
                currentSettings.Name = GameSelect.Items[0].ToString();
                currentSettings.City = MapSelect.Items[0].ToString();
                currentSettings.Country = RaceSelect.Items[0].ToString();
            }
        }

        public void SaveSettings()
        {
            //Save Settings to XML file
            try
            {
                Stream stream = File.Open("settings.xml", FileMode.Create);
                XmlSerializer x = new XmlSerializer(typeof(Settings));
                x.Serialize(stream, currentSettings);
                stream.Close();
            }
            catch
            {
                MessageBox.Show("Unable to open XML File - File in use by other process");
            }
        }

    It appears that when I deserialize it locks the file for writing back, even though I closed the stream. Thanks in advance.
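
    One likely culprit: if Deserialize throws (malformed XML, wrong root element), the catch block runs but stream.Close() never does, so the open handle keeps the file locked until the garbage collector finalizes it. A sketch of the load path with a using block, which releases the handle on every exit path:

        public void LoadSettings()
        {
            try
            {
                // Disposed even if Deserialize throws, so the handle can
                // never be left holding a lock on settings.xml.
                using (Stream stream = File.Open("settings.xml", FileMode.Open))
                {
                    XmlSerializer s = new XmlSerializer(typeof(Settings));
                    currentSettings = (Settings)s.Deserialize(stream);
                }
            }
            catch //Can't read XML - use default settings
            {
                currentSettings.Name = GameSelect.Items[0].ToString();
                currentSettings.City = MapSelect.Items[0].ToString();
                currentSettings.Country = RaceSelect.Items[0].ToString();
            }
        }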

    Read the article

  • Cookie added in Global.asax causes a warning in the application log

    - by Ioxp
    In my Global.asax file I have the following:

        System.Web.HttpCookie isAccess = new System.Web.HttpCookie("IsAccess");
        isAccess.Expires = DateTime.Now.AddDays(-1);
        isAccess.Value = "";
        System.Web.HttpContext.Current.Response.Cookies.Add(isAccess);

    Every time this method runs, the following is logged in the application events as a warning:

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 5/25/2010 12:23:20 PM
        Event time (UTC): 5/25/2010 4:23:20 PM
        Event ID: c515e27a28474eab8d99720c3f5a8e90
        Event sequence: 4148
        Event occurrence: 332
        Event detail code: 0

        Application information:
            Application domain: /LM/W3SVC/2100509645/Root-1-129192259222289896
            Trust level: Full
            Application Virtual Path: /
            Application Path: <PathRemoved>\www\
            Machine name: TIPPER

        Process information:
            Process ID: 6936
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
            Exception type: NullReferenceException
            Exception message: Object reference not set to an instance of an object.

        Request information:
            Request URL:
            Request path:
            User host address:
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: NT AUTHORITY\NETWORK SERVICE

        Thread information:
            Thread ID: 7
            Thread account name: NT AUTHORITY\NETWORK SERVICE
            Is impersonating: False
            Stack trace: at ASP.global_asax.Session_End(Object sender, EventArgs e) in <PathRemoved>\Global.asax:line 113

    Any idea why this code would cause this error?
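
    The stack trace shows the code runs from Session_End, which is raised on a background timer with no active request - there, HttpContext.Current is null, which matches the NullReferenceException. A guarded sketch:

        protected void Session_End(object sender, EventArgs e)
        {
            // Session_End has no request/response attached to it, so
            // HttpContext.Current can legitimately be null here.
            var context = System.Web.HttpContext.Current;
            if (context == null)
                return;   // no response to attach a cookie to

            System.Web.HttpCookie isAccess = new System.Web.HttpCookie("IsAccess");
            isAccess.Expires = DateTime.Now.AddDays(-1);
            isAccess.Value = "";
            context.Response.Cookies.Add(isAccess);
        }

    If the cookie must actually reach the browser, it has to be sent during a real request (e.g. Session_Start or an explicit logout handler), since there is no client on the wire when a session times out.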

    Read the article

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to:

    1. Kill the cassandra process on one server of the cluster
    2. Start it again, wait for the commit log to be written to disk, and kill it again
    3. Make the modifications in the storage.xml file
    4. Rename or delete the files in the data directories according to the changes we made
    5. Start cassandra
    6. Go to 1 with the next server on the list

    My questions would be:

    - Did I understand the process correctly? Is there any risk of data corruption?
    - During the process, there will be servers with different versions of the storage.xml file in the same cluster, same keyspace. Is that a problem?
    - Same question as above if we not only add, rename and remove ColumnFamilies, but also change the CompareWith parameter / transform an existing column family into a super one. Or do we need to change the name?

    Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.

    Read the article

  • Maven test dependency in multi module project

    - by user209947
    I use maven to build a multi-module project. My module 2 depends on module 1 sources in compile scope and module 1 tests in test scope.

    Module 2:

        <dependency>
          <groupId>blah</groupId>
          <artifactId>MODULE1</artifactId>
          <version>blah</version>
          <classifier>tests</classifier>
          <scope>test</scope>
        </dependency>

    This works fine. Now say my module 3 depends on module 1 sources and tests at compile time.

    Module 3:

        <dependency>
          <groupId>blah</groupId>
          <artifactId>MODULE1</artifactId>
          <version>blah</version>
          <classifier>tests</classifier>
          <scope>compile</scope>
        </dependency>

    When I run mvn clean install, my build runs until module 3 and fails there, as it couldn't resolve the module 1 test dependency. I then do a mvn install on module 3 alone, go back and run mvn install on my parent pom to make it build. How can I fix this?
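
    A commonly suggested arrangement for sharing test code between modules is to have module 1 publish a proper test-jar artifact, and to depend on it via <type>test-jar</type> rather than a bare classifier - a sketch. In module 1's pom:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>test-jar</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    In module 3's pom:

        <dependency>
          <groupId>blah</groupId>
          <artifactId>MODULE1</artifactId>
          <version>blah</version>
          <type>test-jar</type>
          <scope>compile</scope>
        </dependency>

    This gives the reactor an explicit attached artifact to resolve during a single mvn clean install pass, instead of a classifier that only exists once module 1 has already been installed.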

    Read the article

  • ID3D10Device Memory Allocation Strategy and E_OUTOFMEMORY

    - by Buzz
    Hi guys, I want to know more detail about the memory allocation strategy in ID3D10Device. Could you give me some help?

    First question: I know D3D10 has done some work on memory virtualization, meaning the client doesn't need to consider where a buffer is reserved - GPU memory, AGP memory or process system memory. Is this correct?

    Second question: when I use ID3D10Device to CreateBuffer continuously, no matter what the buffer desc type is, for example:

        ID3D10Device::CreateBuffer( ... D3D10_USAGE_DEFAULT ... );
        ID3D10Device::CreateBuffer( ... D3D10_USAGE_IMMUTABLE ... );
        ID3D10Device::CreateBuffer( ... D3D10_USAGE_DYNAMIC ... );
        ID3D10Device::CreateBuffer( ... D3D10_USAGE_STAGING ... );

    if CreateBuffer returns the error code E_OUTOFMEMORY, does that mean the process's virtual memory is exhausted? And at that point, would memory allocation on the process default heap also fail?

    Thanks in advance!
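
    One way to probe the second question at the moment of failure is to attempt an ordinary heap allocation of the same size - if the process address space really is exhausted, that should fail too. A sketch (desc and device assumed to be set up as above; requires <d3d10.h> and <cstdlib>):

        ID3D10Buffer* buffer = NULL;
        HRESULT hr = device->CreateBuffer(&desc, NULL, &buffer);
        if (hr == E_OUTOFMEMORY)
        {
            // Probe: distinguish "process VA exhausted" from a
            // driver/video-memory limit surfaced as E_OUTOFMEMORY.
            void* probe = malloc(desc.ByteWidth);
            if (probe == NULL)
                OutputDebugStringA("heap allocation failed too: address space exhausted\n");
            else
                free(probe);   // VA still available; the failure was elsewhere
        }

    E_OUTOFMEMORY from CreateBuffer is not by itself a guarantee that the process heap is gone; the runtime/driver can return it for its own resource limits as well.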

    Read the article

  • Need help in reading callgrind output

    - by n179911
    Hi, I have run callgrind with my application like this: "valgrind --tool=callgrind MyApplication", and then called "callgrind_annotate --auto=yes ./callgrind.out.2489". I see output like:

        768,097,560  PROGRAM TOTALS
        --------------------------------------------------------------------------------
                 Ir  file:function
        --------------------------------------------------------------------------------
         18,624,794  /build/buildd/eglibc-2.11.1/elf/dl-lookup.c:do_lookup_x [/lib/ld-2.11.1.so]
         18,149,492  /src/js/src/jsgc.cpp:JS_CallTracer'2 [/src/firefox-debug-objdir/js/src/libmozjs.so]
         16,328,897  /src/layout/style/nsCSSDataBlock.cpp:nsCSSExpandedDataBlock::DoAssertInitialState() [/src/firefox-debug-objdir/toolkit/library/libxul.so]
         13,376,634  /build/buildd/eglibc-2.11.1/nptl/pthread_getspecific.c:pthread_getspecific [/lib/libpthread-2.11.1.so]
         13,005,623  /build/buildd/eglibc-2.11.1/malloc/malloc.c:_int_malloc [/lib/libc-2.11.1.so]
         10,404,453  ???:0x0000000000009190 [/usr/lib/libpangocairo-1.0.so.0.2800.0]
         10,358,646  /src/xpcom/io/nsFastLoadFile.cpp:NS_AccumulateFastLoadChecksum(unsigned int*, unsigned char const*, unsigned int, int) [/src/firefox-debug-objdir/toolkit/library/libxul.so]
          8,543,634  /src/js/src/jsscan.cpp:js_GetToken [/src/firefox-debug-objdir/js/src/libmozjs.so]
          7,451,273  /src/xpcom/typelib/xpt/src/xpt_arena.c:XPT_ArenaMalloc [/src/firefox-debug-objdir/toolkit/library/libxul.so]
          7,335,131  ???:g_type_check_instance_is_a [/usr/lib/libgobject-2.0.so.0.2400.0]

    I have a few questions:

    1. What does the number in the Ir column mean? Does it mean the program spent cumulatively that long in the function next to it?
    2. How can I tell how many times a function has been called? And does the number include the cost of the functions called by that function?
    3. What do the lines with ??? mean, e.g. ???:0x0000000000009190 [/usr/lib/libpangocairo-1.0.so.0.2800.0]?

    Thank you.

    Read the article

  • How to make a thread that runs at x:00, x:15, x:30 and x:45 do something different at 2:00

    - by rmarimon
    I have a timer thread that needs to run at particular moments of the day to do an incremental replication with a database. Right now it runs on the hour and at 15, 30 and 45 minutes past the hour. This is the code I have, which is working OK:

        public class TimerRunner implements Runnable {
            private static final Semaphore lock = new Semaphore(1);
            private static final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

            public static void initialize() {
                long delay = getDelay();
                executor.schedule(new TimerRunner(), delay, TimeUnit.SECONDS);
            }

            public static void destroy() {
                executor.shutdownNow();
            }

            private static long getDelay() {
                Calendar now = Calendar.getInstance();
                long p = 15 * 60; // run at 00, 15, 30 and 45 minutes past the hour
                long second = now.get(Calendar.MINUTE) * 60 + now.get(Calendar.SECOND);
                return p - (second % p);
            }

            public static void replicate() {
                if (lock.tryAcquire()) {
                    try {
                        Thread t = new Thread(new Runnable() {
                            public void run() {
                                try {
                                    // here is where the magic happens
                                } finally {
                                    lock.release();
                                }
                            }
                        });
                        t.start();
                    } catch (Exception e) {
                        lock.release();
                    }
                } else {
                    throw new IllegalStateException("already running a replicator");
                }
            }

            public void run() {
                try {
                    TimerRunner.replicate();
                } finally {
                    long delay = getDelay();
                    executor.schedule(new TimerRunner(), delay, TimeUnit.SECONDS);
                }
            }
        }

    This process is started by calling TimerRunner.initialize() when a server starts and stopped by calling TimerRunner.destroy(). I have created a full replication process (as opposed to incremental) that I would like to run at a certain moment of the day, say 2:00am. How would I change the above code to do this? I think it should be very simple, something like "if it is now around 2:00am and it's been a long time since I did the full replication, then do it now", but I can't get the if right. Beware that sometimes the replicate process takes much longer to complete - sometimes beyond 15 minutes - posing a problem in running at around 2:00am.
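
    A minimal sketch of that "if" (replicateFull() is a hypothetical entry point for the full replication): track when the last full run happened, and treat any slot that wakes up during the 2am hour with a stale full replication as the nightly one. The hour-wide window tolerates an incremental run that overshoots the 2:00 slot:

        private static volatile long lastFullReplication = 0L;

        private static boolean shouldRunFull() {
            Calendar now = Calendar.getInstance();
            long sinceFull = System.currentTimeMillis() - lastFullReplication;
            boolean nightWindow = now.get(Calendar.HOUR_OF_DAY) == 2;   // 2:00-2:59
            boolean fullIsStale = sinceFull > 20L * 60 * 60 * 1000;     // > 20 hours ago
            return nightWindow && fullIsStale;
        }

        public void run() {
            try {
                if (shouldRunFull()) {
                    lastFullReplication = System.currentTimeMillis();
                    TimerRunner.replicateFull();   // hypothetical full-replication variant
                } else {
                    TimerRunner.replicate();
                }
            } finally {
                executor.schedule(new TimerRunner(), getDelay(), TimeUnit.SECONDS);
            }
        }

    Checking staleness rather than testing for exactly 2:00 is what keeps a long-running 1:45 incremental from silently skipping the nightly full run: whichever slot fires next inside the window picks it up.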

    Read the article

  • Adding a new target type to msbuild: How do I refer to the itemname in the task rules?

    - by jmucchiello
    I'm trying to add a task to build the COM proxy DLL after building the main DLL. So I created the following in a .targets file:

        <Target Name="ProxyDLL"
                Inputs="$(IntDir)%(WHATGOESHERE)_i.c;$(IntDir)dlldata.c"
                Outputs="$(OutDir)%(WHATGOESHERE)ps.dll"
                AfterTargets="Link">
          <CL Sources="$(IntDir)%(WHATGOESHERE)_i.c;$(IntDir)dlldata.c" />
        </Target>

    And I reference it from the .vcxproj file as:

        <ItemGroup>
          <ProxyDLL Include="FTAccountant" />
        </ItemGroup>

    So FTAccountant.DLL is created through the normal build process, and then when it attempts to compile the proxy stubs it creates these command lines:

        cl /c dir\_i.c dir\dlldata.c

    and of course it can't find _i.c. On the first attempt, I put %(Filename) in the WHATGOESHERE space and got this error:

        C:\ActivePay\Build\Proxy DLL.targets(6,3): error MSB4095: The item metadata %(Filename) is being referenced without an item name. Specify the item name by using %(itemname.Filename).

    So I changed it to %(itemname.Filename), and that is an empty string. How do I get the value specified in the task's Include attribute and use it within the task?
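
    In the MSB4095 message, "itemname" is a placeholder for the item type itself - here ProxyDLL - not a literal keyword. A sketch of the target using qualified metadata, which also makes the target batch once per ProxyDLL item:

        <Target Name="ProxyDLL"
                Inputs="$(IntDir)%(ProxyDLL.Identity)_i.c;$(IntDir)dlldata.c"
                Outputs="$(OutDir)%(ProxyDLL.Identity)ps.dll"
                AfterTargets="Link">
          <!-- %(ProxyDLL.Identity) expands to "FTAccountant" for the item above -->
          <CL Sources="$(IntDir)%(ProxyDLL.Identity)_i.c;$(IntDir)dlldata.c" />
        </Target>

    %(ProxyDLL.Filename) would also work here, since the Include value has no extension to strip; the %(itemname.Filename) attempt came back empty because there is no item type literally named "itemname".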

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question).

    What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong):

    - With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired, Rails is hit again and the static file is regenerated with the updated content, ready for the next request.

    - With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh, Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale, Rails serves the updated content to the proxy, which caches it and then serves it to the browser.

    If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine whether the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?
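
    For reference, the conditional-GET half of the second flow looks roughly like this in a Rails controller (a sketch; fresh_when is the Rails helper that emits the ETag/Last-Modified validators and answers 304 when they match):

        def show
          @article = Article.find(params[:id])
          # Replies 304 Not Modified - skipping the render - when the
          # proxy's validators match; otherwise renders and sends new headers.
          fresh_when :etag => @article, :last_modified => @article.updated_at
        end

    Even on a 304, Rails still runs the controller action up to that check, which is exactly the work that page caching avoids - hence the question above.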

    Read the article

  • MSBuild script fails but produces no errors

    - by Kate
    I have an MSBuild script that I am executing through TeamCity. One of the tasks it runs is from Xheo DeployLX CodeVeil, which obfuscates some DLLs. The task I am using is called VeilProject. I have run the CodeVeil project through the interface manually and it works correctly, so I think I can safely assume that the actual obfuscation process is ok.

    This task used to take around 40 minutes, and the rest of the MSBuild file executed perfectly and finished without errors. For some reason the task is now taking 1hr 20 minutes or so to execute. Once the VeilProject task is finished, its output says it completed successfully, yet the MSBuild script fails at this point. I have a task directly after the VeilProject task and it does not get outputted. Using diagnostic output from MSBuild I can see the following. My questions are:

    - Could it be that the MSBuild script has timed out? The task completes, but only after a certain timeout period has elapsed, so it is marked as failed?
    - Why would the build fail with no errors and no warnings?

        [05:39:06]: [Target "Obfuscate"] Finished.
        [05:39:06]: [Target "Obfuscate"] Saving exception map
        [05:49:21]: [Target "Obfuscate"] Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds
        [05:49:22]: [Target "Obfuscate"] Done.
        [05:49:51]: MSBuild output:
        Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds (TaskId:8)
        Done. (TaskId:8)
        Done executing task "VeilProject" -- FAILED. (TaskId:8)
        Done building target "Obfuscate" in project "AMK_Release.proj.teamcity.patch.tcprojx" -- FAILED.: (TargetId:12)
        Done Building Project "C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx" (All target(s)) -- FAILED.

        Project Performance Summary:
            6535484 ms  C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx   1 calls
            6535484 ms  All                                                         1 calls

        Target Performance Summary:
                156 ms  PreClean                1 calls
                266 ms  SetBuildVersionNumber   1 calls
               2406 ms  CopyFiles               1 calls
            6532391 ms  Obfuscate               1 calls

        Task Performance Summary:
                 16 ms  MakeDir                  2 calls
                 31 ms  TeamCitySetBuildNumber   1 calls
                 31 ms  Message                  1 calls
                 62 ms  RemoveDir                2 calls
                234 ms  GetAssemblyIdentity      1 calls
               2406 ms  Copy                     1 calls
            6528047 ms  VeilProject              1 calls

        Build FAILED.
            0 Warning(s)
            0 Error(s)

        Time Elapsed 01:48:57.46
        [05:49:52]: Process exit code: 1
        [05:49:55]: Build finished

    Read the article

  • Check-for-modifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force a build of the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. When I checked the build report, it says the following:

        BUILD EXCEPTION
        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source':
        Server certificate verification failed: issuer is not trusted (https://sp-ci.sbsnetwork.local:8443).
        Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log <sameUrlAbove>
        -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml
        --username ccnetadmin --password cruise --non-interactive --no-auth-cache

        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    My sourcecontrol node in ccnet.config is as shown below:

        <sourcecontrol type="svn">
          <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable>
          <trunkUrl> check out url </trunkUrl>
          <workingDirectory> C:\ProjectWorkingDirectories\IntranetPortal\Source </workingDirectory>
          <username> ccnetadmin </username>
          <password> cruise </password>
        </sourcecontrol>

    Can anyone suggest how to avoid this error?
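
    The failure is svn refusing the server's untrusted (self-signed) certificate while running non-interactively. One commonly suggested workaround is to accept the certificate permanently, once, while logged in as the same Windows account the CruiseControl.NET service runs under, so later non-interactive runs find it already trusted:

        REM run interactively as the service account; answer "p" (accept permanently)
        "C:\Program Files\VisualSVN Server\bin\svn.exe" list https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source --username ccnetadmin

    Alternatively, replacing the VisualSVN Server certificate with one issued by a CA the machine already trusts removes the prompt for every account.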

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background

    My group has 4 SQL Server databases: Production, UAT, Test and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs), I make a request of my manager, who promotes to Test. After testing, she submits a request to an admin, who promotes to UAT. After successful user testing, the same admin promotes to Production.

    The Problem

    The entire process is awkward for a few reasons:

    1. Each person must manually track their changes. If I update, add or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain, and it's a waste of the testers' time anyway.
    2. Many of the changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know).

    We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type of a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.

    The Question

    People have been doing this kind of work for decades, so I imagine there must be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structures differ, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process?

    For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.

    Read the article

  • NOT A DUPLICATE! VS2010 - How to automatically stop compile on first compile error

    - by Ben Robbins
    {rant} First I'd like to say that this IS NOT A DUPLICATE. I've asked this question previously, but it got closed as a duplicate when it isn't. This question is SPECIFIC to VS 2010, and the answers to the so-called duplicate work in VS 2008 but not in VS 2010 (at least not for me or anyone I know). So before you go closing something as a duplicate, how about you read the question carefully and try the answer for yourself and see if it actually works. Apologies for the rant, but there is no obvious way to contact the SO police that closed the issue or get it reopened. {/rant}

    At work we have a C# solution with over 80 projects. In VS 2008 we use a macro to stop the compile as soon as a project in the solution fails to build (see this question for several options for VS 2005 & VS 2008: http://stackoverflow.com/questions/134796/how-to-automatically-stop-visual-c-build-at-first-compile-error). Is it possible to do the same in VS 2010?

    What we have found is that in VS 2010 the macros don't work (at least I couldn't get them to work), as it appears that the environment events don't fire in VS 2010. The default behaviour is to continue as far as possible and display a list of errors in the error window. I'm happy for it to stop either as soon as an error is encountered (file level) or as soon as a project fails to build (project level).

    Answers for VS 2010 only please. If the macros do work, then a detailed explanation of how to configure them for VS 2010 would be appreciated. Thanks.
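
    For context, the VS 2008-era macro being referred to lives in the EnvironmentEvents module of the Macro Explorer and looks like this; the open question is why its build events no longer fire in VS 2010:

        Private Sub BuildEvents_OnBuildProjConfigDone( _
                ByVal Project As String, ByVal ProjectConfig As String, _
                ByVal Platform As String, ByVal SolutionConfig As String, _
                ByVal Success As Boolean) Handles BuildEvents.OnBuildProjConfigDone
            ' Cancel the rest of the solution build as soon as one project fails.
            If Success = False Then
                DTE.ExecuteCommand("Build.Cancel")
            End If
        End Sub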

    Read the article

  • Help a Beginner with a PHP based Login System

    - by Brian Lang
    I'm a bit embarrassed to say it, but I've run into an issue creating a PHP-based login system. I'm using a site template to handle the looks of the login process, so I will spare you that code. Here is my thought process on how to handle the login:

    Create a simple login.php file. On there will be a form whose action is set to itself. It will check to see if submit has been clicked, and if so, validate that the user entered a valid username/password. If they did, set a session variable, save some login info (username, NOT password), and redirect them to a restricted area. If the login info isn't valid, save an error message in a session variable, display the error message giving further instruction, and wait for the user to resubmit.

    Here is a chunk of what I have - hopefully one of you experts can see where I've gone wrong and give me some insight:

        if(isset($_POST['submit'])) {
            if(!empty($_POST['username']) AND !empty(!$_POST['password'])) {
                header("Location: http://www.google.com");
            } else {
                $err = 'All the fields must be filled in!';
            }
        }
        if($err) {
            $_SESSION['msg']['login-err'] = $err;
        }
        ?>

    Now the above is just an example - the intent of the code is to validate that the user has given input for username and password. If they have, I would like them, in this case, to be redirected to google.com (for the sake of this example). If not, save an error message. Given my current code, the error message displays perfectly, but when the user submits with something entered for the username and password, the page simply doesn't redirect.

    I'm sure this is a silly question, but I am a beginner and, well, to be honest, a bit buzzed right now. Thanks so much!
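
    Two things in the snippet are worth flagging. The stray ! inside !empty(!$_POST['password']) negates the value before testing it, so a filled-in password still fails the check; and header('Location: ...') only works if nothing (including the template's HTML) has been sent to the browser yet. A corrected sketch of just the validation branch:

        if (isset($_POST['submit'])) {
            if (!empty($_POST['username']) && !empty($_POST['password'])) {
                // must run before any output has been sent, or the
                // redirect header is discarded with a "headers already sent" warning
                header('Location: http://www.google.com');
                exit; // stop executing the rest of the page after redirecting
            }
            $err = 'All the fields must be filled in!';
        }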

    Read the article

  • How to cache pages using background jobs ?

    - by Alexandre
    Definitions:

    - resource = collection of database records
    - regeneration = processing these records and outputting the corresponding html

    Current flow:

    1. Receive client request
    2. Check for resource in cache
    3. If not in cache or cache expired, regenerate
    4. Return result

    The problem is that the regeneration step can tie up a single server process for 10-15 seconds. If a couple of users request the same resource, that could result in a couple of processes regenerating the exact same resource simultaneously, each taking up 10-15 seconds.

    Wouldn't it be preferable to have the frontend signal some background process saying "Hey, regenerate this resource for me"? But then what would it display to the user? "Rebuilding" is not acceptable. All resources would have to be in cache ahead of time. This could be a problem, as the database would almost be duplicated on the filesystem (too big to fit in memory). Is there a way to avoid this? Not ideal, but it seems like the only way out.

    But then there's one more problem: how to keep two processes from requesting the regeneration of a resource at the same time? The background process could be regenerating the resource when a frontend asks for the regeneration of the same resource.

    I'm using PHP and the Zend Framework, just in case someone wants to offer a platform-specific solution. Not that it matters, though - I think this problem applies to any language/framework. Thanks!
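
    One common pattern for the duplicate-regeneration problem is serve-stale-while-one-rebuilds: whoever wins a non-blocking lock regenerates, everyone else keeps getting the expired copy. A PHP sketch against a filesystem cache (isExpired() and regenerate() are hypothetical helpers):

        $resourceId = preg_replace('/\W+/', '', $_GET['r']);   // illustrative id
        $cacheFile  = "/var/cache/pages/{$resourceId}.html";
        $lockFile   = $cacheFile . '.lock';

        if (is_file($cacheFile) && !isExpired($cacheFile)) {
            readfile($cacheFile);                  // fresh: serve directly
            return;
        }

        $lock = fopen($lockFile, 'c');             // create lock file if absent
        if (flock($lock, LOCK_EX | LOCK_NB)) {
            // we won the race: rebuild while everyone else serves the stale copy
            $html = regenerate($resourceId);       // the 10-15 second step
            file_put_contents($cacheFile, $html, LOCK_EX);
            flock($lock, LOCK_UN);
            echo $html;
        } elseif (is_file($cacheFile)) {
            readfile($cacheFile);                  // expired but instant
        } else {
            flock($lock, LOCK_SH);                 // first-ever request: wait for the winner
            readfile($cacheFile);
        }

    This keeps only one process per resource in the slow path and sidesteps pre-generating the whole database, at the cost of briefly serving stale content.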

    Read the article

  • How do you fix issues with the debugger for the Android plug-in for Eclipse not attaching?

    - by user279112
    I have been trying to program something for the Android mobile phone, using Eclipse and the Android plug-in for that IDE, and my debugger used to attach just fine. But it has suddenly started having consistent issues attaching. I just get the message about how the process is waiting for the debugger to attach, and then it just won't. Whether the attachment fails seems to have something to do with the code I'm trying to debug, as it is drastically more of an issue with some versions of my code than with others (on the same app). How do I fix this?

    Now, before you answer, please understand that I have researched this issue already. I have found a couple of solutions that have worked for other people but do not work for me. One is setting the debuggable property in the main manifest file to true, and the other is going into Dev Tools and into some settings menu, selecting the process, and essentially telling the emulator "debug this process". Neither has really worked.

    Any other ideas? And just in case... I've run into one blasted technical issue after another trying to program for that phone. And I'm not the only one having these issues; when I go online to research them, it is always very easy to find many people who have the same problems and are having to use the shoddiest, sloppiest, most "ghetto" workarounds. I know that many people have created good applications for that phone, but I don't see how I'm supposed to do that when the SDK and the plug-in just don't work half the time. Does anybody know how I can put all this trash behind me, once and for all? Thanks for your answers to either question!

    Read the article

  • Replace .sln with MSBuild and wrap contained projects into targets

    - by Filburt
    I'd like to create an MSBuild project that reflects the project dependencies in a solution and wraps the VS projects inside reusable targets. The problem I'd like to solve by doing this is to svn-export, build and deploy a specific assembly (and its dependencies) in a BizTalk application.

    My question is: how can I make the targets for svn-exporting, building and deploying reusable, and also reuse the wrapped projects when they are built for different dependencies?

    I know it would be simpler to just build the solution and deploy only the assemblies needed, but I'd like to reuse the targets as much as possible.

    The parts - the project I'd like to deploy:

        <Project DefaultTargets="Deploy" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

          <PropertyGroup>
            <ExportRoot Condition="'$(Export)'==''">Export</ExportRoot>
          </PropertyGroup>

          <Target Name="Clean_Export">
            <RemoveDir Directories="$(ExportRoot)\My.Project.Dir" />
          </Target>

          <Target Name="Export_MyProject">
            <Exec Command="svn export svn://xxx/trunk/Biztalk2009/MyProject.btproj --force"
                  WorkingDirectory="$(ExportRoot)" />
          </Target>

          <Target Name="Build_MyProject" DependsOnTargets="Export_MyProject">
            <MSBuild Projects="$(ExportRoot)\My.Project.Dir\MyProject.btproj"
                     Targets="Build" Properties="Configuration=Release" />
          </Target>

          <Target Name="Deploy_MyProject" DependsOnTargets="Build_MyProject">
            <Exec Command="BTSTask AddResource -ApplicationName:CORE -Source:MyProject.dll" />
          </Target>

        </Project>

    The projects it depends upon look almost exactly like this (other .btproj and .csproj files).
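
    One way to avoid one copy of these targets per project is a single parameterized project file invoked recursively via the MSBuild task, giving each dependency its own property set. A sketch (all property names here are illustrative):

        <!-- DeployOne.proj : generic export/build/deploy pipeline -->
        <Project DefaultTargets="Deploy" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

          <PropertyGroup>
            <ExportRoot Condition="'$(ExportRoot)'==''">Export</ExportRoot>
          </PropertyGroup>

          <Target Name="Export">
            <Exec Command="svn export svn://xxx/trunk/Biztalk2009/$(ProjectFile) --force"
                  WorkingDirectory="$(ExportRoot)" />
          </Target>

          <Target Name="Build" DependsOnTargets="Export">
            <MSBuild Projects="$(ExportRoot)\$(ProjectDir)\$(ProjectFile)"
                     Targets="Build" Properties="Configuration=Release" />
          </Target>

          <Target Name="Deploy" DependsOnTargets="Build">
            <Exec Command="BTSTask AddResource -ApplicationName:$(BizTalkApp) -Source:$(AssemblyFile)" />
          </Target>

        </Project>

    A wrapper project then calls it once per assembly, dependencies first:

        <MSBuild Projects="DeployOne.proj" Targets="Deploy"
                 Properties="ProjectFile=MyDependency.btproj;ProjectDir=My.Dependency.Dir;BizTalkApp=CORE;AssemblyFile=MyDependency.dll" />
        <MSBuild Projects="DeployOne.proj" Targets="Deploy"
                 Properties="ProjectFile=MyProject.btproj;ProjectDir=My.Project.Dir;BizTalkApp=CORE;AssemblyFile=MyProject.dll" />

    Because MSBuild treats each distinct Projects-plus-Properties combination as a separate build, the same wrapped targets are reused for every dependency without being redefined.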

    Read the article

  • Naming multi-instance performance counters in .NET

    - by Roger Lipscombe
    Most multiple-instance performance counters in Windows seem to automatically(?) have #n appended if there's more than one instance with the same name. For example, if you look under the Process category in Perfmon, you'll see:

        ...
        dwm
        explorer
        explorer#1
        ...

    I have two explorer.exe processes, so the second counter has #1 appended to its name. When I attempt to do this in a .NET application:

    - I can create the category and register the instance (using the PerformanceCounterCategory.Create overload that takes a CounterCreationDataCollection).
    - I can open the counter for write and write to it.
    - When I open the counter a second time, it opens the same counter.

    This means that I have two applications fighting over the counters. The documentation for PerformanceCounter.InstanceName states that # is not allowed in the name. So: how do I have multiple-instance performance counters that are actually multiple-instance, where the second (and subsequent) instances get #n appended to the name?

    That is: I know I can put the process ID (for example) on the instance name. This works, but has the unfortunate side effect that restarting the process results in a new PID, and Perfmon continues monitoring the old counter.
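
    For what it's worth, the #n suffix is applied by the Process counter provider itself, not by the counter infrastructure, so custom categories don't get it for free. A hedged sketch of the PID-suffix approach combined with process lifetime, which at least makes the stale instance disappear when its process exits:

        using System.Diagnostics;

        // MyCategory/MyCounter are illustrative; the category must have been
        // created as PerformanceCounterCategoryType.MultiInstance.
        string instance = "myapp_" + Process.GetCurrentProcess().Id;

        var counter = new PerformanceCounter("MyCategory", "MyCounter", instance, false);
        // Process lifetime: the instance is removed automatically when this
        // process terminates, so Perfmon can't keep watching a dead instance.
        counter.InstanceLifetime = PerformanceCounterInstanceLifetime.Process;
        counter.RawValue = 0;

    Note that InstanceLifetime must be set before the counter is first used (before the RawValue assignment here), or it throws.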

    Read the article

  • iPhone crash log with dSYM not loading debug information

    - by AngeDeLaMort
    Hello, I was trying to see why my application crashed on the device (iPhone) using the dSYM generated along with the executable (in ad hoc), but I don't know why there isn't any useful information. It seems that Organizer is able to find the appropriate dSYM and translate some data into something more readable, but when it comes to my application, I just have an address.

    Since I know how to reproduce the crash, I've tried to set up my build so it can help me in the future. I've checked that all the proper flags are set in the project build properties, and everything seems fine. After doing some research, it seems that all information is stripped at link time, and the dSYM seems completely useless. I've played with some flags, but nothing changed.

    So, is there something special to do in order to get the crash file human-readable? Or is it impossible with the ad hoc settings? The closest thing to working that I've done was to build a debug version and look up the address in it. At least it seems to give the right file.

    So, I made a sample app, and here is what I have (the line I want is #4):

        Thread 0 Crashed:
        0   libobjc.A.dylib    0x00003ebc objc_msgSend + 20
        1   UIKit              0x0005c970 -[UIView dealloc] + 60
        2   UIKit              0x0005c840 -[UIImageView dealloc] + 76
        3   CoreFoundation     0x0003963a -[NSObject release] + 28
        4   MyApplication      0x000046a6 0x1000 + 13990
        5   UIKit              0x00069750 -[UIViewController view] + 44
        6   MyApplication      0x000053fa 0x1000 + 17402

    The crash is made using two successive releases on an object. Thanks in advance.
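
    When Organizer won't symbolicate, the address can usually be resolved by hand with atos, pointing it at the dSYM and the load address shown in the log (0x1000 here). A sketch - the architecture must match the device slice:

        # resolve frame 4 against the dSYM's DWARF data
        atos -arch armv6 \
             -o MyApplication.app.dSYM/Contents/Resources/DWARF/MyApplication \
             -l 0x1000 0x000046a6

    This only works if the dSYM actually matches the binary that crashed; the UUIDs must agree, which can be checked with dwarfdump --uuid on both files.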

    Read the article
