Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Could not import Django settings into Google App Engine

    - by gkelsall
    Hello all you Google App Engine experts, I have used Django a little before but am new to Google App Engine, and I am trying to use its development web server with Django for the first time. I don't know if this is relevant, but I previously had Django 1.1 and Python 2.6 on my Windows XP machine, and even though I have uninstalled Python 2.6 there is still a folder and entries in the registry. I have followed the instructions from Google, but when I browse to the GAE development web server it cannot find my settings (details below). Any hints gratefully received. Regards, Geoff

      C:\Documents and Settings\GeoffK\My Documents\ing\ingsite>echo %PATH%
      C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\system32\WindowsPowerShell\v1.0;;C:\Python25;C:\Python25\Lib\site-packages\django\bin;C:\Documents and Settings\GeoffK\My Documents\ing\ingsite;C:\Program Files\Google\google_appengine\

      C:\Documents and Settings\GeoffK\My Documents\ing\ingsite>echo %PYTHONPATH%
      C:\Documents and Settings\GeoffK\My Documents\ing\ingsite

      C:\Documents and Settings\GeoffK\My Documents\ing\ingsite>dev_appserver.py --debug_imports ingiliz\
      INFO     2009-08-04 07:29:45,328 appengine_rpc.py:157] Server: appengine.google.com
      INFO     2009-08-04 07:29:45,358 appcfg.py:322] Checking for updates to the SDK.
      INFO     2009-08-04 07:29:45,578 appcfg.py:336] The SDK is up to date.
      WARNING  2009-08-04 07:29:45,578 datastore_file_stub.py:404] Could not read data store data from c:\docume~1\geoffk\locals~1\temp\dev_appserver.datastore
      WARNING  2009-08-04 07:29:45,578 datastore_file_stub.py:404] Could not read data store data from c:\docume~1\geoffk\locals~1\temp\dev_appserver.datastore.history
      WARNING  2009-08-04 07:29:45,608 dev_appserver.py:3296] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging
      INFO     2009-08-04 07:29:45,625 dev_appserver_main.py:465] Running application ingiliz on port 8080: http://localhost:8080

    Now attempting to browse (I can post more detail here if needed):

      if not settings.DATABASE_ENGINE:
      File "C:\Python25\lib\site-packages\django\conf\__init__.py", line 28, in __getattr__
        self._import_settings()
      File "C:\Python25\lib\site-packages\django\conf\__init__.py", line 59, in _import_settings
        self._target = Settings(settings_module)
      File "C:\Python25\lib\site-packages\django\conf\__init__.py", line 94, in __init__
        raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
      ImportError: Could not import settings 'settings' (Is it on sys.path? Does it have syntax errors?): No module named settings
      INFO     2009-08-04 07:31:02,187 dev_appserver.py:2982] "GET / HTTP/1.1" 500 -
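
    For reference, the classic pattern for bootstrapping Django under the GAE development server is to set DJANGO_SETTINGS_MODULE before Django's settings machinery is first touched, with settings.py living in the application root next to app.yaml so it is importable. The sketch below is a minimal main.py for that setup; the module name 'settings' assumes the default Django layout and is not taken from the original question:

      import os
      # Must run before anything imports django.conf.settings:
      os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

      import django.core.handlers.wsgi
      from google.appengine.ext.webapp import util

      def main():
          # Hand every request to Django's WSGI handler.
          application = django.core.handlers.wsgi.WSGIHandler()
          util.run_wsgi_app(application)

      if __name__ == '__main__':
          main()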

    Read the article

  • <msbuild/> task fails while <devenv/> succeeds for MFC application in CruiseControl.NET?

    - by ee
    The Overview: I am working on a Continuous Integration build of an MFC application via CruiseControl.NET and VS2010. When building my .sln, a "Visual Studio" CCNet task (<devenv/>) works, but a simple MSBuild wrapper script (see below) run via the CCNet <msbuild/> task fails with errors like:

      error RC1015: cannot open include file 'winres.h'.
      error C1083: Cannot open include file: 'afxwin.h': No such file or directory
      error C1083: Cannot open include file: 'afx.h': No such file or directory

    The Question: How can I adjust the build environment of my MSBuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the MSBuild environment, but how do I fix it for MSBuild+VS2010+MFC+CCNet?)

    Background Details: We have successfully upgraded an MFC application (an .exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application in the CI server environment. I did a full installation of VS2010 (Professional) on the build server; that way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected. I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the above-mentioned errors.

    The Simple MSBuild Wrapper:

      <?xml version="1.0" encoding="utf-8"?>
      <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <Target Name="Build">
          <!-- Doing some versioning stuff here -->
          <MSBuild Projects="target.sln" Properties="Configuration=ReleaseUnicode;Platform=Any CPU;..." />
        </Target>
      </Project>

    My Assumptions: This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion. I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. Given the VS2010 MSBuild support for Visual C++ projects (.vcxproj), shouldn't MSBuilding a solution be pretty close to compiling via Visual Studio? But how do I do that? Am I setting environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler-defaults level.

    Update 1: This appears to only ever happen in two cases - resource compilation (rc.exe) and precompiled header (stdafx.h) compilation - and only for certain projects. I was thinking it was across the board, but indeed it appears only to be in these cases. I guess I will keep digging and hope someone has some insight they would be willing to share.
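
    One untested possibility, not from the original post: a CCNet service starts MSBuild without the INCLUDE/LIB/PATH environment a Visual Studio command prompt provides, which matches the rc.exe and stdafx.h failures. A small wrapper batch file, run via a CCNet <exec> task instead of <msbuild>, would import that environment first (the path assumes a default VS2010 install; file names are illustrative):

      rem build.cmd - give MSBuild the same environment a VS2010 command prompt has
      call "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" x86
      msbuild wrapper.proj /p:Configuration=ReleaseUnicode /p:Platform="Any CPU"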

    Read the article

  • Dissertation about website and database security - in need of some pointers

    - by ClarkeyBoy
    Hi, I am working on my dissertation in my final year at university at the moment. One of the areas I need to research is security - for both websites and for databases. I currently have sections on the following:

    Website
    - Form security, such as data validation. This section is about preventing errors made by legitimate users as much as possible, rather than stopping hackers - for example, comparing a field to a regular expression and giving meaningful feedback on any errors which did occur, so as to stop them happening again.
    - Constraints. For example, if a value must be true or false then use a checkbox. If it is likely to be one of several values then use a dropdown or a set of radio buttons, and so on. If the value is unpredictable then use regular expressions to limit what characters users are allowed to enter, to restrict the length of the string, and sometimes to limit the format (such as for dates/times, post codes and so on).
    - Permissions. Sometimes you can limit permissions on the form. This applies when you know exactly who (whether it be people's names or a group of people, such as administrators or employees) is going to need access to the form. Restricting permissions will stop members of the public from being able to access the form.
    - Filtering. Symbols or strings which could be used maliciously or cause the website to act incorrectly (such as the script tag) should be filtered out or HTML-encoded.
    - Captcha images, which can be used to prevent automated systems from filling in and submitting the form.
    - File uploads. There are some hacks - such as using double extensions - which can allow hackers to upload malicious files.

    Databases (this is nowhere near done yet, but the sections I have planned are listed below)
    - SQL statements vs stored procedures.
    - Throwing an error when one of the variables contains particular characters or groups of characters (I can't remember what characters they are, but I have seen a message thrown back at me before when I have tried to enter HTML or something into a text area).
    - SQL injection - and ways around it, with some examples.

    Does anyone have any hints and tips on where I could go for some decent, reliable information, either about these areas or about other areas of security that I could cover? Thanks in advance. Regards, Richard

    PS: I am a complete newbie when it comes to security, so please be patient with me. If any of the information I have put down is wrong or could be sub-sectioned then please feel free to say so.
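
    Not from the original question, but a minimal before/after illustration of the SQL injection point above, using Python's sqlite3 module (the table name and input are invented):

      import sqlite3

      conn = sqlite3.connect("example.db")      # illustrative database
      user_input = "1; DROP TABLE users"        # hostile input from a form field

      # Vulnerable: the input is spliced into the SQL text itself,
      # so the attacker controls part of the statement.
      # conn.execute("SELECT * FROM users WHERE id = " + user_input)

      # Safe: the value is bound as a parameter, so the driver never
      # interprets it as SQL, no matter what it contains.
      rows = conn.execute("SELECT * FROM users WHERE id = ?", (user_input,))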

    Read the article

  • Why is jQuery.parseJSON not a function?

    - by Pandiya Chendur
    I use the following jQuery statements and I am getting the error "jQuery.parseJSON is not a function". My function is:

      function Iteratejsondata() {
      var HfJsonValue = { "Table": [
        { "Emp_Id": "3", "Identity_No": "", "Emp_Name": "Jerome", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Supervisior", "Desig_Description": "Supervisior of the Construction", "SalaryBasis": "Monthly", "FixedSalary": "25000.00" },
        { "Emp_Id": "4", "Identity_No": "", "Emp_Name": "Mohan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Acc ", "Desig_Description": "Accountant", "SalaryBasis": "Monthly", "FixedSalary": "200.00" },
        { "Emp_Id": "5", "Identity_No": "", "Emp_Name": "Murugan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "150.00" },
        { "Emp_Id": "6", "Identity_No": "", "Emp_Name": "Ram", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "120.00" },
        { "Emp_Id": "7", "Identity_No": "", "Emp_Name": "Raja", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "135.00" },
        { "Emp_Id": "8", "Identity_No": "", "Emp_Name": "Raja kumar", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason Helper", "Desig_Description": "Mason Helper", "SalaryBasis": "Weekly", "FixedSalary": "105.00" },
        { "Emp_Id": "9", "Identity_No": "", "Emp_Name": "Lakshmi", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason Helper", "Desig_Description": "Mason Helper", "SalaryBasis": "Weekly", "FixedSalary": "100.00" },
        { "Emp_Id": "10", "Identity_No": "", "Emp_Name": "Palani", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Carpenter", "Desig_Description": "Carpenter", "SalaryBasis": "Weekly", "FixedSalary": "200.00" },
        { "Emp_Id": "11", "Identity_No": "", "Emp_Name": "Annamalai", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Carpenter", "Desig_Description": "Carpenter", "SalaryBasis": "Weekly", "FixedSalary": "220.00" },
        { "Emp_Id": "12", "Identity_No": "", "Emp_Name": "David", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Fixer", "Desig_Description": "Steel Fixer", "SalaryBasis": "Weekly", "FixedSalary": "220.00" },
        { "Emp_Id": "13", "Identity_No": "", "Emp_Name": "Chandru", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Fixer", "Desig_Description": "Steel Fixer", "SalaryBasis": "Weekly", "FixedSalary": "220.00" },
        { "Emp_Id": "14", "Identity_No": "", "Emp_Name": "Mani", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Helper", "Desig_Description": "Steel Helper", "SalaryBasis": "Weekly", "FixedSalary": "175.00" },
        { "Emp_Id": "15", "Identity_No": "", "Emp_Name": "Karthik", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Fixer", "Desig_Description": "Wood Fixer", "SalaryBasis": "Weekly", "FixedSalary": "195.00" },
        { "Emp_Id": "16", "Identity_No": "", "Emp_Name": "Bala", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Fixer", "Desig_Description": "Wood Fixer", "SalaryBasis": "Weekly", "FixedSalary": "185.00" },
        { "Emp_Id": "17", "Identity_No": "", "Emp_Name": "Tamil arasi", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Helper", "Desig_Description": "Wood Helper", "SalaryBasis": "Weekly", "FixedSalary": "185.00" },
        { "Emp_Id": "18", "Identity_No": "", "Emp_Name": "Perumal", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Cook", "Desig_Description": "Cook", "SalaryBasis": "Weekly", "FixedSalary": "105.00" },
        { "Emp_Id": "19", "Identity_No": "", "Emp_Name": "Andiappan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Watchman", "Desig_Description": "Watchman", "SalaryBasis": "Weekly", "FixedSalary": "150.00" }
      ] };
      //var jsonObj = eval('(' + HfJsonValue + ')');
      var jsonObj = jQuery.parseJSON(HfJsonValue);

    and my page looks like this:

      <div id="Pagination" class="page-numbers"></div>
      <br style="clear:both;" />
      <div id="Searchresult"></div>
      <div id="hiddenresult" style="display:none;"> </div>
      <script type="text/javascript">
        var pagination_options = {
          num_edge_entries: 2,
          num_display_entries: 8,
          callback: pageselectCallback,
          items_per_page: 3
        }
        function pageselectCallback(page_index, jq) {
          var items_per_page = pagination_options.items_per_page;
          var offset = page_index * items_per_page;
          var new_content = $('#hiddenresult div.resultsdiv').slice(offset, offset + items_per_page).clone();
          $('#Searchresult').empty().append(new_content);
          return false;
        }
        function initPagination() {
          var num_entries = $('#hiddenresult div.resultsdiv').length;
          // Create pagination element
          $("#Pagination").pagination(num_entries, pagination_options);
        }
        $(document).ready(function() {
          Iteratejsondata();
          initPagination();
        });
      </script>

    I've inspected the page with Firebug and saw that all the jQuery files have been downloaded, so why is this happening? Any suggestions?

    Read the article

  • iPhone / Objective-C: NSMutableArray writeToFile won't write to file. Always returns NO

    - by Joel
    I'm trying to serialize two NSMutableArrays of NSObjects that implement the NSCoding protocol. However, it works for one (stacks) and not the other (cards). I have the following block of code:

      -(void) saveCards {
          NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
          NSString* documentsDirectory = [paths objectAtIndex:0];
          NSString* cardsFile = [documentsDirectory stringByAppendingPathComponent:@"cards.state"];
          NSString* stacksFile = [documentsDirectory stringByAppendingPathComponent:@"stacks.state"];
          BOOL c = [rootStack.cards writeToFile:cardsFile atomically:YES];
          BOOL s = [rootStack.stacks writeToFile:stacksFile atomically:YES];
      }

    I step through this method using the debugger, and after the last two lines of code run, I check the values of the two BOOLs. BOOL c is NO and BOOL s is YES. The stacks array is actually empty (which is probably why it works). The cards array has contents. Why is it that the array with contents is failing? I can't figure this out. I've looked through numerous threads on SOF; each of them says the problem is that the protection level of the files being written prevented the write. This is not my problem, as I'm writing to the Documents folder. I've double- and triple-checked that neither rootStack.cards nor rootStack.stacks is nil, and I've checked that cards does indeed have content.

    Here are the coder methods for my Notecard class (I added all the if statements as part of trying to solve this problem, to make sure trying to encode nil values doesn't break something):

      -(void) encodeWithCoder:(NSCoder *)encoder {
          if(text) [encoder encodeObject:text forKey:@"text"];
          if(backText) [encoder encodeObject:backText forKey:@"backText"];
          if(x) [encoder encodeObject:x forKey:@"x"];
          if(y) [encoder encodeObject:y forKey:@"y"];
          if(width) [encoder encodeObject:width forKey:@"width"];
          if(height) [encoder encodeObject:height forKey:@"height"];
          if(timeCreated) [encoder encodeObject:timeCreated forKey:@"timeCreated"];
          if(audioManagerTicket) [encoder encodeObject:audioManagerTicket forKey:@"audioManagerTicket"];
          if(backgroundColor) [encoder encodeObject:backgroundColor forKey:@"backgroundColor"];
      }

      -(id) initWithCoder:(NSCoder *)decoder {
          self = [super init];
          if(!self) return nil;
          self.text = [decoder decodeObjectForKey:@"text"];
          self.backText = [decoder decodeObjectForKey:@"backText"];
          self.x = [decoder decodeObjectForKey:@"x"];
          self.y = [decoder decodeObjectForKey:@"y"];
          self.width = [decoder decodeObjectForKey:@"width"];
          self.height = [decoder decodeObjectForKey:@"height"];
          self.timeCreated = [decoder decodeObjectForKey:@"timeCreated"];
          self.audioManagerTicket = [decoder decodeObjectForKey:@"audioManagerTicket"];
          self.backgroundColor = [decoder decodeObjectForKey:@"backgroundColor"];
          return self;
      }

    Each field is either an NSString, NSNumber, or UIColor. Thanks for any help.
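
    One likely explanation, offered as an inference rather than from the original thread: -writeToFile:atomically: only succeeds when every element in the collection is a property-list type (NSString, NSNumber, NSDate, NSData, NSArray, NSDictionary). Custom NSCoding objects like these Notecards - and UIColor - are not plist types, which would explain why the empty array writes fine while the populated one returns NO. The NSCoding-aware route is an archiver, roughly:

      // Archive: returns YES/NO just like writeToFile:atomically:,
      // but drives the NSCoding methods implemented above.
      BOOL c = [NSKeyedArchiver archiveRootObject:rootStack.cards toFile:cardsFile];
      BOOL s = [NSKeyedArchiver archiveRootObject:rootStack.stacks toFile:stacksFile];

      // Unarchive later (mutableCopy restores mutability):
      NSMutableArray *cards =
          [[NSKeyedUnarchiver unarchiveObjectWithFile:cardsFile] mutableCopy];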

    Read the article

  • Quartz.Net Windows Service Configure Logging

    - by Tarun Arora
    In this blog post I'll be covering logging for the Quartz.Net Windows Service:

    01 - Why doesn't the Quartz.Net Windows Service log by default?
    02 - Configuring the Quartz.Net Windows Service to log to the event log, a file, the console, etc.
    03 - Results: logging in action

    If you are new to Quartz.Net I would recommend going through:
    - A brief introduction to Quartz.Net
    - Walkthrough of installing & testing Quartz.Net as a Windows Service
    - Writing & scheduling your first HelloWorld job with Quartz.Net

    01 - Why doesn't the Quartz.Net Windows Service log by default?

    If you are trying to figure out why the Quartz.Net Windows Service isn't logging, isn't writing anything to the event log, isn't writing anything to a file, how to configure it to use log4net, or how to change its logging level - look no further; this blog post should help you answer these questions.

    Quartz.Net uses the Common.Logging framework for all of its logging needs. If you navigate to the directory where the Quartz.Net Windows Service is installed (I have the service installed in C:\Program Files (x86)\Quartz.net; you can find the location by looking at the properties of the service) and open 'Quartz.Server.exe.config', you'll see that Quartz.Net is already set up for logging to a ConsoleAppender and an EventLogAppender, but only the ConsoleAppender is active. So, unless you have a console associated with the Quartz.Net service, you won't be able to see any logging.

      <log4net>
        <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%d [%t] %-5p %l - %m%n" />
          </layout>
        </appender>
        <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%d [%t] %-5p %l - %m%n" />
          </layout>
        </appender>
        <root>
          <level value="INFO" />
          <appender-ref ref="ConsoleAppender" />
          <!-- uncomment to enable event log appending -->
          <!-- <appender-ref ref="EventLogAppender" /> -->
        </root>
      </log4net>

    Problem: in the configuration above the Quartz.Net Windows Service only has the ConsoleAppender active, so no logging will be done to the event log. Moreover, the RollingFileAppender isn't set up at all, so Quartz.Net will not log to an application trace log file.

    02 - Configuring the Quartz.Net Windows Service to log to the event log, a file, the console, etc.

    Let's change this behaviour by changing the config file. In the config file below:
    - I have added the RollingFileAppender. This configures the Quartz.Net service to write to a log file (<appender name="GeneralLog" type="log4net.Appender.RollingFileAppender">).
    - I have specified the location for the log file (<arg key="configFile" value="Trace/application.log.txt"/>).
    - I have enabled the EventLogAppender and RollingFileAppender to be written to by the Quartz.Net Windows Service.
    - I have changed the default level of logging from 'INFO' to 'ALL', which means all activity performed by the Quartz.Net Windows Service will be logged. You might want to tune this back to 'DEBUG' or 'INFO' later, as logging 'ALL' will produce a lot of log data (<level value="ALL"/>).
    - Since I have changed the logging level to 'ALL', I have added an application setting to turn off log4net's internal debug logging (<add key="log4net.Internal.Debug" value="false"/>).

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <configSections>
          <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
          <sectionGroup name="common">
            <section name="logging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging" />
          </sectionGroup>
        </configSections>
        <common>
          <logging>
            <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4net">
              <arg key="configType" value="INLINE" />
              <arg key="configFile" value="Trace/application.log.txt"/>
              <arg key="level" value="ALL" />
            </factoryAdapter>
          </logging>
        </common>
        <appSettings>
          <add key="log4net.Internal.Debug" value="false"/>
        </appSettings>
        <log4net>
          <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%d [%t] %-5p %l - %m%n" />
            </layout>
          </appender>
          <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%d [%t] %-5p %l - %m%n" />
            </layout>
          </appender>
          <appender name="GeneralLog" type="log4net.Appender.RollingFileAppender">
            <file value="Trace/application.log.txt"/>
            <appendToFile value="true"/>
            <maximumFileSize value="1024KB"/>
            <rollingStyle value="Size"/>
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%d{HH:mm:ss} [%t] %-5p %c - %m%n"/>
            </layout>
          </appender>
          <root>
            <level value="ALL" />
            <appender-ref ref="ConsoleAppender" />
            <appender-ref ref="EventLogAppender" />
            <appender-ref ref="GeneralLog"/>
          </root>
        </log4net>
      </configuration>

    Note - please ensure you restart the Quartz.Net Windows Service for the config changes to be picked up by the service.

    03 - Results: logging in action

    Once you start the Quartz.Net Windows Service, logging should write all activity to the console, event log and file. See the screenshots below.

    Figure - Quartz.Net Windows Service logging all activity to the event log
    Figure - Quartz.Net Windows Service logging all activity to the application log file

    Where is the output from the log4net ConsoleAppender? By default the console isn't available to Windows services, web services or Windows Forms applications, so the output is simply discarded - unless you run the process interactively, which you can do by firing up Quartz.Server.exe -i to see the output.

    This was the fourth in a series of posts on enterprise scheduling using Quartz.Net; in the next post I'll be covering troubleshooting why a scheduled task hasn't fired on the Quartz.Net Windows Service. All Quartz.Net specific blog posts are listed here. Thank you for taking the time to read this blog post. If you enjoyed it, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • Very slow compile times on Visual Studio

    - by johnc
    We are getting very slow compile times, which can take upwards of 20+ minutes on dual-core 2GHz, 2GB RAM machines.

    A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option, unfortunately, so I don't want this to descend into a VSS bash.)

    We are looking at combining projects (not nice, as we like the separation of concerns, but it is a good opportunity to refactor away some dead wood). We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This, I can see, will become a DLL hell as we try to keep things in sync.

    I am interested to know how other teams have dealt with this scaling issue. What do you do when your code base reaches a critical mass and you are wasting half the day watching the status bar deliver compile messages?

    UPDATE: Apologies, I neglected to mention this is a C# solution. Thanks for all the C++ suggestions, but it's been a few years since I've had to worry about headers. At a distance I'd say I miss C++, but I'm not sure I want to go back.

    EDIT: Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped):
    - New 3GHz laptop - the power of lost utilization works wonders when whinging to management
    - Disable antivirus during compile
    - 'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI

    Still not rip-snorting through a compile, but every bit helps. Orion did mention in a comment that generics may have a play also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disc activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in the live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used, just for compile-time performance.

    WORKAROUND: We are testing the practice of building new areas of the application in new solutions, importing in the latest DLLs as required, then integrating them into the larger solution when we are happy with them. We may also do the same to existing code, by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle-like experiences with rapid recompiling during development.

    Read the article

  • Overlay bitmap on live video

    - by sijith
    Hi, I want to overlay a bitmap on live video. I am trying to do this with the DirectShow samples: I edited the PlayCapMoniker sample and added some functions to enable this, following the procedure explained at http://www.ureader.com/msg/1471251.aspx

    Now I am getting these errors:

      Error 2  error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 3  error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 5  error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 6  error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 8  error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 9  error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 21 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 22 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 26 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 27 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      Error 36 error C2228: left of '.m_alpha' must have class/struct/union
      Error 38 error C2227: left of '->SetAlphaBitmap' must point to class/struct/union/generic type
      Error 7  error C2146: syntax error : missing ';' before identifier 'Pool'
      Error 4  error C2146: syntax error : missing ';' before identifier 'Format'  c:\Program Files\Microsoft Platform SDK\include\Vmr9.h  368  PlayCapMoniker
      Error 1  error C2143: syntax error : missing ';' before '*'
      Error 20 error C2143: syntax error : missing ';' before '*'
      Error 25 error C2143: syntax error : missing ';' before '*'
      Error 30 error C2065: 'g_pMixerBitmap' : undeclared identifier
      Error 33 error C2065: 'g_pMixerBitmap' : undeclared identifier
      Error 37 error C2065: 'g_pMixerBitmap' : undeclared identifier
      Error 31 error C2065: 'g_hbm' : undeclared identifier
      Error 32 error C2065: 'g_hbm' : undeclared identifier
      Error 35 error C2065: 'config' : undeclared identifier
      Error 10 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 11 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 12 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 13 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 16 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 19 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 23 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 24 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 28 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 29 error C2061: syntax error : identifier 'IDirect3DSurface9'
      Error 14 error C2061: syntax error : identifier 'IDirect3DDevice9'
      Error 15 error C2061: syntax error : identifier 'IDirect3DDevice9'
      Error 17 error C2061: syntax error : identifier 'IDirect3DDevice9'
      Error 18 error C2061: syntax error : identifier 'IDirect3DDevice9'
      Error 34 error C2039: 'pDDS' : is not a member of '_VMR9AlphaBitmap'  SDK\Samples\Multimedia\DirectShow\Capture\PlayCapMoniker\PlayCapMoniker.cpp  263  PlayCapMoniker
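
    A hedged guess at the root cause rather than a confirmed fix: the C2061/C2146 errors around IDirect3DSurface9, IDirect3DDevice9, 'Pool' and 'Format' typically mean Vmr9.h was included without the Direct3D 9 header in front of it, and most of the cascade (including the undeclared g_pMixerBitmap) would follow from that. With the DirectX SDK installed, the include order would be:

      #include <d3d9.h>   // defines IDirect3DSurface9, IDirect3DDevice9, D3DFORMAT, D3DPOOL
      #include <vmr9.h>   // VMR-9 mixer interfaces (IVMRMixerBitmap9, VMR9AlphaBitmap) build on the D3D9 types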

    Read the article

  • WiX unresolved reference error

    - by David
    I'm using WiX version 3.0.5419.0. I have two .wxs files: one which is a fragment, and another which uses the fragment to create the .msi file. Here is the file which uses the fragment (DaisyFarmer.wxs):

      <?xml version='1.0' encoding='windows-1252'?>
      <Wix xmlns='http://schemas.microsoft.com/wix/2006/wi'
           xmlns:iis='http://schemas.microsoft.com/wix/IIsExtension'>
        <Product Name='Daisy Web Site 1.0' Id='BB7FBBE4-0A25-4cc7-A39C-AC916B665220'
                 UpgradeCode='8A5311DE-A125-418f-B0E1-5A30B9C667BD'
                 Language='1033' Codepage='1252' Version='1.0.0' Manufacturer='the man'>
          <Package Id='5F341544-4F95-4e01-A2F8-EF74448C0D6D' Keywords='Installer'
                   Description="desc" Manufacturer='the man' InstallerVersion='100'
                   Languages='1033' Compressed='yes' SummaryCodepage='1252' />
          <Media Id='1' Cabinet='Sample.cab' EmbedCab='yes' DiskPrompt="CD-ROM #1" />
          <Property Id='DiskPrompt' Value="the man" />
          <PropertyRef Id="NETFRAMEWORK35"/>
          <Condition Message='This setup requires the .NET Framework 3.5.'>
            <![CDATA[Installed OR (NETFRAMEWORK35)]]>
          </Condition>
          <Feature Id='DaisyFarmer' Title='DaisyFarmer' Level='1'>
            <ComponentRef Id='SchedulerComponent' />
          </Feature>
        </Product>
      </Wix>

    The fragment I'm referencing is (Scheduler.wxs):

      <?xml version="1.0" encoding="utf-8"?>
      <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
        <Fragment>
          <DirectoryRef Id="TARGETDIR">
            <Directory Id="dir2787390E4B7313EB8005DE08108EFEA4" Name="scheduler">
              <Component Id="SchedulerComponent" Guid="{9254F7E1-DE41-4EE5-BC0F-BA668AF051CB}">
                <File Id="fil9A013D0BFB837BAC71FED09C59C5501B" KeyPath="yes" Source="SourceDir\DTBookMonitor.exe" />
                <File Id="fil4F0D8D05F53E6AFBDB498E7C75C2D98F" KeyPath="no" Source="SourceDir\DTBookMonitor.exe.config" />
                <File Id="filF02F4686267D027CB416E044E8C8C2FA" KeyPath="no" Source="SourceDir\monitor.bat" />
                <File Id="fil05B8FF38A3C85FE6C4A58CD6FDFCD2FB" KeyPath="no" Source="SourceDir\output.txt" />
                <File Id="fil397F04E2527DCFDF7E8AC1DD92E48264" KeyPath="no" Source="SourceDir\pipelineOutput.txt" />
                <File Id="fil83DFACFE7F661A9FF89AA17428474929" KeyPath="no" Source="SourceDir\process.bat" />
                <File Id="fil2809039236E0072642C52C6A52AD6F2F" KeyPath="no" Source="SourceDir\README.txt" />
              </Component>
            </Directory>
          </DirectoryRef>
        </Fragment>
      </Wix>

    I then run the following commands:

      candle -ext WixUtilExtension -ext WiXNetFxExtension DaisyFarmer.wxs Scheduler.wxs
      light -sice:ICE20 -ext WixUtilExtension -ext WiXNetFxExtension Scheduler.wixobj DaisyFarmer.wixobj -out DaisyFarmer.msi

    I'm getting an error when I run light.exe which says:

      DaisyFarmer.wxs(20) : error LGHT0094 : Unresolved reference to symbol 'Component:SchedulerComponent' in section 'Product:{BB7FBBE4-0A25-4CC7-A39C-AC916B665220}'.

    What am I missing?

    Read the article

  • Solaris 10 branded zone VM Templates for Solaris 11 on OTN

    - by jsavit
    Early this year I wrote the article Ours Goes To 11, which describes the ability to import Solaris 10 systems into a "Solaris 10 branded zone" under Oracle Solaris 11. I did this using Solaris 11 Express, and the capability remains in Solaris 11 with only slight changes. This important tool lets you painlessly inhale a Solaris Container from Solaris 10, or an entire Solaris 10 system ("the global zone"), into virtualized environments on a Solaris 11 OS.

    Just recently, Oracle provided Oracle VM Templates for Oracle Solaris 10 Zones to let you create Solaris 10 branded zones for Solaris 11 even if you don't currently have access to install media or a running Solaris 10 system. To use this, just download the Oracle VM Template for Oracle Solaris 10 Zones from OTN at http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html. This page contains images of Oracle Solaris 10 8/11 (the recent update to Solaris 10) in SPARC and x86 formats suitable for creating branded zones. The same page also has a VirtualBox image you can download for a complete Solaris 10 install in a guest virtual machine you can run on any host OS that supports VirtualBox. Both sets of downloads provide a quick - and extremely easy - way to set up a virtual Solaris 10 environment. In the case of the Oracle VM Templates, they illustrate several advanced features of Solaris 11. To start, just go to the above link, download the template for the hardware platform (SPARC or x86) you want, and download the README file also linked from that page.

    Install prerequisites

    The README file tells you to install the prerequisite Solaris 11 package that implements the Solaris 10 brand. Then you can install instances of zones with that brand.

      # pkg install pkg:/system/zones/brand/brand-solaris10
                 Packages to install:  1
             Create boot environment: No
      Create backup boot environment: Yes

      DOWNLOAD      PKGS       FILES    XFER (MB)
      Completed      1/1       44/44      0.4/0.4

      PHASE                        ACTIONS
      Install Phase                  74/74

      PHASE                          ITEMS
      Package State Update Phase       1/1
      Image State Update Phase         2/2

    That took only a few minutes, and didn't require a reboot.

    Install the Solaris 10 zone

    Now it's time to run the downloaded template file. First make it executable via the chmod command, of course. I found that (unlike what is stated in the README) there was no need to rename the downloaded file to remove the .bin. When you run it you provide several parameters to describe the zone configuration:

    -a IP address - the IP address and optional netmask for the zone. This is the only mandatory parameter.
    -z zonename - the name of the zone you would like to create.
    -i interface - the package will create an exclusive-IP zone using a virtual NIC (VNIC) based on this physical interface. In my case, I have a NIC called rge0.
    -p PATH - specifies the path in which you want the zone root to be placed. In my case, I have a ZFS dataset mounted at /zones, and this will create a zone root at /zones/s10u10.

    Kicking it off, you will see a copyright message and then messages showing progress building the zone, which only takes a few minutes.

      # ./solaris-10u10-x86.bin -p /zones -a 192.168.1.100 -i rge0 -z s10u10
      ...
      Checking disk-space for extraction
      Ok
      Extracting in /export/home/CDimages/s10zone/bootimage.ihaqvh ...
      100% [===============================]
      Checking data integrity
      Ok
      Checking platform compatibility
      The host and the image do not have the same Solaris release:
              host Solaris release: 5.11
              image Solaris release: 5.10
      Will create a Solaris 10 branded zone.
      Warning: could not find a defaultrouter
      Zone won't have any defaultrouter configured

      IMAGE:     ./solaris-10u10-x86.bin
      ZONE:      s10u10
      ZONEPATH:  /zones/s10u10
      INTERFACE: rge0
      VNIC:      vnicZBI13379
      MAC ADDR:  2:8:20:5c:1a:cc
      IP ADDR:   192.168.1.100
      NETMASK:   255.255.255.0
      DEFROUTER: NONE
      TIMEZONE:  US/Arizona

      Checking disk-space for installation
      Ok
      Installing in /zones/s10u10 ...
      100% [===============================]
      Using a static exclusive-IP
      Attaching s10u10
      Booting s10u10
      Waiting for boot to complete
      booting...
      booting...
      booting...
      Zone s10u10 booted

      The zone's root password has been set using the root password of the local host. You can change the zone's root password to further harden the security of the zone: being root, log into the zone from the local host with the command 'zlogin s10u10'. Once logged in, change the root password with the command 'passwd'.

    The nifty part, in my opinion (besides being so easy), is that the zone was created as an exclusive-IP zone on a virtual NIC. This network configuration lets you enforce traffic isolation from other zones, enforce network Quality of Service, and even let the zone set its own characteristics like IP address and packet size. Independence of the zone's network characteristics from the global zone is one of the enhancements in Solaris 10 that make it easier to consolidate zones while preserving their autonomy, yet provide control in a consolidated environment.

    Let's see what the virtual network environment looks like by issuing commands from the Solaris 11 global zone. First I'll use old-school ifconfig, and then I'll use the new ipadm and dladm commands.

      # ifconfig -a4
      lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
              inet 127.0.0.1 netmask ff000000
      rge0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2
              inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255
              ether 0:14:d1:18:ac:bc
      vboxnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
              inet 192.168.56.1 netmask ffffff00 broadcast 192.168.56.255
              ether 8:0:27:f8:62:1c

      # dladm show-phys
      LINK         MEDIA        STATE    SPEED  DUPLEX   DEVICE
      yge0         Ethernet     unknown  0      unknown  yge0
      yge1         Ethernet     unknown  0      unknown  yge1
      rge0         Ethernet     up       1000   full     rge0
      vboxnet0     Ethernet     up       1000   full     vboxnet0

      # dladm show-link
      LINK                 CLASS  MTU   STATE    OVER
      yge0                 phys   1500  unknown  --
      yge1                 phys   1500  unknown  --
      rge0                 phys   1500  up       --
      vboxnet0             phys   1500  up       --
      vnicZBI13379         vnic   1500  up       rge0
      s10u10/vnicZBI13379  vnic   1500  up       rge0
      s10u10/net0          vnic   1500  up       rge0

      # dladm show-vnic
      LINK                 OVER   SPEED  MACADDRESS       MACADDRTYPE  VID
      vnicZBI13379         rge0   1000   2:8:20:5c:1a:cc  random       0
      s10u10/vnicZBI13379  rge0   1000   2:8:20:5c:1a:cc  random       0
      s10u10/net0          rge0   1000   2:8:20:9d:d0:79  random       0

      # ipadm show-addr
      ADDROBJ      TYPE    STATE  ADDR
      lo0/v4       static  ok     127.0.0.1/8
      rge0/_a      dhcp    ok     192.168.1.3/24
      vboxnet0/_a  static  ok     192.168.56.1/24
      lo0/v6       static  ok     ::1/128

    Log into the zone

    The install step already booted the zone, so let's log into it. Notice how you have to be appropriately privileged to log into a zone. This is my home system so I'm being a bit cavalier, but in a production environment you can give granular control of who can log in to which zones. Voila! A Solaris 10 environment under a Solaris 11 kernel. Notice the output from the uname -a and ifconfig commands, and the output from a ping to a nearby host.

      $ zlogin s10u10
      zlogin: You lack sufficient privilege to run this command (all privs required)
      savit@home:~$ sudo zlogin s10u10
      Password:
      [Connected to zone 's10u10' pts/5]
      Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
      # uname -a
      SunOS s10u10 5.10 Generic_Virtual i86pc i386 i86pc
      # ifconfig -a4
      lo0: flags=2001000849 mtu 8232 index 1
              inet 127.0.0.1 netmask ff000000
      vnicZBI13379: flags=1000843 mtu 1500 index 2
              inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
              ether 2:8:20:5c:1a:cc
      # bash
      bash-3.2# ifconfig -a
      lo0: flags=2001000849 mtu 8232 index 1
              inet 127.0.0.1 netmask ff000000
      vnicZBI13379: flags=1000843 mtu 1500 index 2
              inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
              ether 2:8:20:5c:1a:cc
      bash-3.2# ping 192.168.1.2
      192.168.1.2 is alive

    For fun, I configured Apache (setting its configuration file in /etc/apache2) and brought it up. Easy - took just a few minutes.

      bash-3.2# svcs apache2
      STATE          STIME    FMRI
      disabled       12:38:46 svc:/network/http:apache2
      bash-3.2# svcadm enable apache2

    Summary

    In just a few minutes, I built a functioning virtual Solaris 10 environment under my Solaris 11 system. It was... easy! While I can still do it the manual way (creating and using a system archive), this is a low-effort way to create a Solaris 10 zone on Solaris 11.

    Read the article

  • Snow Leopard sqlite3-ruby install problem

    - by JZ
    UPDATE 3/20/10: I'm running Mac OS X Snow Leopard. This problem was caused by a recent train wreck in which I updated Ruby without RVM. I've attempted to properly install/run RVM; however, I can't get it to work.

    I am unable to install the sqlite3-ruby gem. I get the following:

      ERROR: Error installing sqlite3-ruby:
      ERROR: Failed to build gem native extension.

    How do I fix this?

      justin-zollarss-mac-pro:~ justinz$ rails -v
      Rails 2.3.5
      justin-zollarss-mac-pro:~ justinz$ ruby -v
      ruby 1.8.7 (2008-08-11 patchlevel 72) [i686-darwin10.2.0]
      justin-zollarss-mac-pro:~ justinz$ gem -v
      1.3.5
      justin-zollarss-mac-pro:~ justinz$ which gem
      /usr/local/bin/gem
      justin-zollarss-mac-pro:~ justinz$ whereis gem
      /usr/bin/gem
      justin-zollarss-mac-pro:~ justinz$ which ruby
      /usr/local/bin/ruby
      justin-zollarss-mac-pro:~ justinz$ whereis ruby
      /usr/bin/ruby
      justin-zollarss-mac-pro:~ justinz$ which rails
      /usr/local/bin/rails
      justin-zollarss-mac-pro:~ justinz$ whereis rails
      /usr/bin/rails
      justin-zollarss-mac-pro:~ justinz$ gem list

      *** LOCAL GEMS ***

      actionmailer (2.3.5)
      actionpack (2.3.5)
      activerecord (2.3.5)
      activeresource (2.3.5)
      activesupport (2.3.5)
      builder (2.1.2)
      bundler (0.9.11)
      columnize (0.3.1)
      erubis (2.6.5)
      fastercsv (1.5.1)
      ffi (0.6.3)
      gbarcode (0.98.16)
      i18n (0.3.5)
      linecache (0.43)
      mail (2.1.3)
      memcache-client (1.8.0)
      prawn (0.8.4)
      prawn-core (0.8.4)
      prawn-layout (0.8.4)
      prawn-security (0.8.4)
      rack (1.1.0, 1.0.1)
      rack-mount (0.6.1)
      rack-test (0.5.3)
      rails (2.3.5)
      rake (0.8.7)
      ruby-debug (0.10.3)
      ruby-debug-base (0.10.3)
      rubygems-update (1.3.6)
      sqlite3 (0.0.8)
      text-format (1.0.0)
      thor (0.13.4)
      tzinfo (0.3.17)

      justin-zollarss-mac-pro:~ justinz$ sudo gem install sqlite3-ruby
      Password:
      Building native extensions.  This could take a while...
      ERROR:  Error installing sqlite3-ruby:
          ERROR: Failed to build gem native extension.

      /usr/local/bin/ruby extconf.rb
      checking for fdatasync() in -lrt... no
      checking for sqlite3.h... yes
      checking for sqlite3_open() in -lsqlite3... no
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.

      Provided configuration options:
          --with-opt-dir
          --without-opt-dir
          --with-opt-include
          --without-opt-include=${opt-dir}/include
          --with-opt-lib
          --without-opt-lib=${opt-dir}/lib
          --with-make-prog
          --without-make-prog
          --srcdir=.
          --curdir
          --ruby=/usr/local/bin/ruby
          --with-sqlite3-dir
          --without-sqlite3-dir
          --with-sqlite3-include
          --without-sqlite3-include=${sqlite3-dir}/include
          --with-sqlite3-lib
          --without-sqlite3-lib=${sqlite3-dir}/lib
          --with-rtlib
          --without-rtlib
          --with-sqlite3lib
          --without-sqlite3lib

      Gem files will remain installed in /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5 for inspection.
      Results logged to /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/ext/sqlite3_api/gem_make.out
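
    The extconf output above shows that sqlite3.h was found but the sqlite3 library could not be linked, which usually means the build is picking up a stale or incomplete SQLite install. One hedged workaround (the path is illustrative; adjust it to wherever a working SQLite lives):

      # Reinstall SQLite itself first (from source, MacPorts, etc.),
      # then tell the gem build exactly where to find it:
      sudo gem install sqlite3-ruby -- --with-sqlite3-dir=/usr/local

      # If it still fails, this log records the exact compile/link command that broke:
      # /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/ext/sqlite3_api/mkmf.log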

    Read the article

  • android include tag - invalid layout reference

    - by Dalibor Frivaldsky
    Hello, I'm having a problem including a different layout through the include tag in an Android layout XML file. When specifying the layout reference (@layout/...), I get an InflateException in the Eclipse ADT with the following error:

      InflateException: You must specify a valid layout reference. The layout ID @layout/func_edit_simple_calculator_toolbox is not valid.

    The reference should be valid, as I selected it from the list of my other layouts and didn't type it in. I'm using Android SDK v2.1. These are the layout files.

    func_edit_simple_calculator_toolbox.xml:

      <?xml version="1.0" encoding="utf-8"?>
      <TableLayout xmlns:android="http://schemas.android.com/apk/res/android"
          android:layout_height="wrap_content"
          android:layout_width="wrap_content">
          <TableRow android:id="@+id/TableRow01"
              android:layout_width="wrap_content"
              android:layout_height="wrap_content">
              <Button android:id="@+id/Button01" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="1"></Button>
              <Button android:id="@+id/Button02" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="2"></Button>
              <Button android:id="@+id/Button03" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="3"></Button>
              <Button android:id="@+id/Button04" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="+"></Button>
          </TableRow>
          <TableRow android:id="@+id/TableRow02"
              android:layout_width="wrap_content"
              android:layout_height="wrap_content">
              <Button android:id="@+id/Button05" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="4"></Button>
              <Button android:id="@+id/Button06" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="5"></Button>
              <Button android:id="@+id/Button07" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="6"></Button>
              <Button android:id="@+id/Button08" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="-"></Button>
          </TableRow>
      </TableLayout>

    function_editor_layout.xml:

      <?xml version="1.0" encoding="utf-8"?>
      <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
          android:orientation="vertical"
          android:layout_width="fill_parent"
          android:layout_height="fill_parent">
          <com.calculoid.FunctionView
              android:id="@+id/function_view"
              android:layout_width="fill_parent"
              android:layout_height="fill_parent"/>
          <include
              android:id="@+id/include01"
              android:layout_width="wrap_content"
              android:layout_height="wrap_content"
              layout="@layout/func_edit_simple_calculator_toolbox"></include>
      </LinearLayout>

    Does anyone know what the problem could be? Thanks in advance.

    Read the article

  • Visual Studio 2013, ASP.NET MVC 5 Scaffolded Controls, and Bootstrap

    - by plitwin
    A few days ago, I created an ASP.NET MVC 5 project in the brand new Visual Studio 2013. I added some model classes and then proceeded to scaffold a controller class and views using the Entity Framework.

    Scaffolding Some Views

    Visual Studio 2013, by default, uses the Bootstrap 3 responsive CSS framework. Great; after all, we all want our web sites to be responsive and work well on mobile devices. Here's an example of a scaffolded Create view as shown in the Google Chrome browser.

    Looks pretty good. Okay, so let's increase the width of the Title, Description, Address, and Date/Time textboxes, and decrease the width of the State and MaxActors textbox controls. Can't be that hard...

    Digging Into the Code

    Let's take a look at the scaffolded Create.cshtml file. Here's a snippet of the code behind the Create view. Pretty simple stuff.

      @using (Html.BeginForm())
      {
          @Html.AntiForgeryToken()
          <div class="form-horizontal">
              <h4>RandomAct</h4>
              <hr />
              @Html.ValidationSummary(true)
              <div class="form-group">
                  @Html.LabelFor(model => model.Title, new { @class = "control-label col-md-2" })
                  <div class="col-md-10">
                      @Html.EditorFor(model => model.Title)
                      @Html.ValidationMessageFor(model => model.Title)
                  </div>
              </div>
              <div class="form-group">
                  @Html.LabelFor(model => model.Description, new { @class = "control-label col-md-2" })
                  <div class="col-md-10">
                      @Html.EditorFor(model => model.Description)
                      @Html.ValidationMessageFor(model => model.Description)
                  </div>
              </div>

    A little more digging and I discovered that there are three CSS files of importance in how the page is rendered: bootstrap.css (plus its minimized cohort) and Site.css.

    The Root of the Problem

    And here's the root of the problem, which you'll find in the following CSS in Site.css:

      /* Set width on the form input elements since they're 100% wide by default */
      input,
      select,
      textarea {
          max-width: 280px;
      }

    Yes, Microsoft is for some reason setting the maximum width of all input, select, and textarea controls to 280 pixels. I'm not sure of the motivation behind this, but until you change or override this by assigning the form controls to some other CSS class, your controls will never be able to be wider than 280px.

    The Fix

    Okay, so here's the deal: I hope to become very competent in all things Bootstrap in the near future, but I don't think you should have to become a Bootstrap guru in order to modify some scaffolded control widths. And you don't. Here is the solution I came up with:

    1. Find the aforementioned CSS code in Site.css and change it to something more tenable, such as:

      /* Set width on the form input elements since they're 100% wide by default */
      input,
      select,
      textarea {
          max-width: 600px;
      }

    2. Because the @Html.EditorFor HTML helper doesn't support the passing of HTML attributes, you will need to replace any @Html.EditorFor() helpers with @Html.TextBoxFor(), @Html.TextAreaFor(), @Html.CheckBoxFor(), etc. helpers, and then add a custom width attribute to each control you wish to modify. Thus, the earlier stretch of code might end up looking like this:

      @using (Html.BeginForm())
      {
          @Html.AntiForgeryToken()
          <div class="form-horizontal">
              <h4>Random Act</h4>
              <hr />
              @Html.ValidationSummary(true)
              <div class="form-group">
                  @Html.LabelFor(model => model.Title, new { @class = "control-label col-md-2" })
                  <div class="col-md-10">
                      @Html.TextBoxFor(model => model.Title, new { style = "width: 400px" })
                      @Html.ValidationMessageFor(model => model.Title)
                  </div>
              </div>
              <div class="form-group">
                  @Html.LabelFor(model => model.Description, new { @class = "control-label col-md-2" })
                  <div class="col-md-10">
                      @Html.TextAreaFor(model => model.Description, new { style = "width: 400px" })
                      @Html.ValidationMessageFor(model => model.Description)
                  </div>
              </div>

    Resulting Form

    Here's what the page looks like after the fix.

    Technorati Tags: ASP.NET MVC, ASP.NET MVC 5, Bootstrap

    Read the article

  • iPhone App IDs and Provisioning... Does App ID get used instead of provisioning ID if I decide to us

    - by Jann
    This is a question that has been bugging me for a while. I started my app (now submitted - not yet approved) not wishing to get into the mess that is APNS (push). I did the following in the iPhone Developer Center, under Provisioning Portal > Provisioning: I created a Development and a Distribution provisioning profile. I installed both in Xcode. Everything hunky dory. The Development profile scares me a bit by expiring so soon (90 days), but I can remove it from the iPhone(s) and sign with a new one later. I tested using the Development profile, and later submitted the app by signing it with the Distribution profile. I then uploaded the Distribution-signed app to iTunes Connect (the App Store). Okay, I understand that much.

    Now, what I don't understand is this: now that I understand the theories and methods behind how push works, I want to add it to my app. I already went to the iPhone Developer Center, under Provisioning Portal > App IDs, and created a Development provisioning profile and a Distribution provisioning profile there (push & in-app purchase enabled).

    Here is where it gets confusing to me. All the books and docs I have read say that I have to sign the app with this "App ID" provisioning profile (push-enabled) from now on. Does that mean I no longer ever use the previously created provisioning profiles? If I were to import these "App ID" provisioning profiles into Xcode, they would exist alongside my previously generated "non-push" profiles. ~/Library/Mobile Devices/Provisioning Profiles now has 2 files, one Development and one Distribution; it will then have 4, even though for this app I will not use the "non-push" ones anymore, right? (Actually, since they are locked by bundle codes and App IDs, will I never use them again if all further versions of this app use push?)

    Confused. Can anyone enlighten me? Why not use the "App ID" profiles in the first place for everyone - even if you are not going to use push? It would keep things simpler. Should I only generate push-enabled profiles from now on - even if I am not sure I am going to use push (or, for that matter, in-app purchase)? Please give me some insight. I do not want to do this wrong. Thanks! Jann

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now. Scene setting A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible. General tips Here is a simple tick list you can follow in order to tune your lookups: Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?) Only pick the columns that you need, ignore everything else Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length. Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8. Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! Thinking outside the box It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing: There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow. Know your data. 
If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory. Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop. Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition. Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all once you have the key column(s) from your lookup set then you can use that key to get the rest of attributes further downstream, perhaps even in another dataflow. Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond. Above all, experiment and be creative with different combinations. You may be surprised at what works. Final thoughts If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet
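As a concrete illustration of the general tips above, here is a sketch of what a cache-charging query might look like once the advice is applied (the table and column names are hypothetical, not from the post):

    -- Only the columns the lookup actually needs, narrowed and filtered
    SELECT  CustomerKey,                                        -- the join key
            CAST(CustomerCode AS VARCHAR(10)) AS CustomerCode   -- narrowed from VARCHAR(50)
    FROM    dbo.DimCustomer
    WHERE   Region = 'EMEA';  -- charge the cache with only the partition being processed

Compare that to picking dbo.DimCustomer from the dropdown, which would pull every column of every row into the cache.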

    Read the article

  • Asp.Net MVC EnableClientValidation doesn't work.

    - by Farrell
    I want both client-side validation and server-side validation. I implemented this as follows: Model: ( The model has a DataModel(dbml) which contains the Test class ) namespace MyProject.TestProject { [MetadataType(typeof(TestMetaData))] public partial class Test { } public class TestMetaData { [Required(ErrorMessage="Please enter a name.")] [StringLength(50)] public string Name { get; set; } } } Controller is nothing special. The View: <% Html.EnableClientValidation(); %> <% using (Ajax.BeginForm("Index", "Test", FormMethod.Post, new AjaxOptions {}, new { enctype = "multipart/form-data" })) {%> <%= Html.AntiForgeryToken()%> <fieldset> <legend>Widget Description</legend> <div> <%= Html.LabelFor(Model => Model.Name) %> <%= Html.TextBoxFor(Model => Model.Name) %> <%= Html.ValidationMessageFor(Model => Model.Name) %> </div> </fieldset> <div> <input type="submit" value="Save" /> </div> <% } %> To make this all work I also added references to these js files: <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script> <script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script> It almost works, but not 100%: it does validate with no page refresh after pressing the button, but it only does "half" of the client-side validation. The validation message only appears when you type some text into the textbox and then backspace over it; when I try to trigger it by tabbing between controls there is no client-side validation. Am I missing some reference or something? (I use Asp.Net MVC 2 RTM)

    Read the article

  • Add animation when a user control becomes visible or collapsed in WPF

    - by sanjeev40084
    I have two xaml files, MainWindow.xaml and a user control, WorkDetail.xaml. The MainWindow.xaml file has a textbox, button, listbox and a reference to WorkDetail.xaml (a user control which is collapsed). Whenever the user enters any text, it gets added to the listbox when the add button is clicked. When any item in the listbox is double-clicked, the visibility of WorkDetail.xaml is set to Visible and it gets displayed. WorkDetail.xaml (the user control) has a textblock and a button. The textblock displays the text of the selected item and the close button sets the visibility of the WorkDetail window to collapsed. Now I am trying to animate WorkDetail.xaml when it becomes visible and collapsed. When any item in the listbox is double-clicked and WorkDetail.xaml's visibility is set to visible, I want to create an animation moving the WorkDetail.xaml window from right to left on MainWindow. When the Close button in WorkDetail.xaml is clicked and WorkDetail.xaml is collapsed, I want to slide the WorkDetail.xaml file from left to right off MainWindow. Here is the screenshot: MainWindow.xaml code: <Window...> <Grid Background="Black" > <TextBox x:Name="enteredWork" Height="39" Margin="44,48,49,0" TextWrapping="Wrap" VerticalAlignment="Top"/> <ListBox x:Name="workListBox" Margin="26,155,38,45" FontSize="29.333" MouseDoubleClick="workListBox_MouseDoubleClick"/> <Button x:Name="addWork" Content="Add" Height="34" Margin="71,103,120,0" VerticalAlignment="Top" Click="Button_Click"/> <TestWpf:WorkDetail x:Name="WorkDetail" Visibility="Collapsed"/> </Grid> </Window> MainWindow.xaml.cs class code: namespace TestWpf { public partial class MainWindow : Window { public MainWindow() { this.InitializeComponent(); } private void Button_Click(object sender, RoutedEventArgs e) { workListBox.Items.Add(enteredWork.Text); } private void workListBox_MouseDoubleClick(object sender, MouseButtonEventArgs e) { WorkDetail.workTextBlk.Text = (string)workListBox.SelectedItem; WorkDetail.Visibility = Visibility.Visible; } } } WorkDetail.xaml code: <UserControl ..> <Grid Background="#FFD2CFCF"> <TextBlock x:Name="workTextBlk" Height="154" Margin="33,50,49,0" TextWrapping="Wrap" VerticalAlignment="Top" FontSize="29.333" Background="#FFF13939"/> <Button x:Name="btnClose" Content="Close" Height="62" Margin="70,0,94,87" VerticalAlignment="Bottom" Click="btnClose_Click"/> </Grid> </UserControl> WorkDetail.xaml.cs class code: namespace TestWpf { public partial class WorkDetail : UserControl { public WorkDetail() { this.InitializeComponent(); } private void btnClose_Click(object sender, System.Windows.RoutedEventArgs e) { Visibility = Visibility.Collapsed; } } } Can anyone tell me how I can do this?
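One common approach, sketched here purely as an illustration (the offsets and duration are arbitrary, and the method name is mine, not from the question): attach a TranslateTransform to the user control and animate its X property when toggling visibility.

    // Requires the System.Windows.Media and System.Windows.Media.Animation namespaces.
    private void ShowWorkDetailWithSlide()
    {
        var slide = new TranslateTransform(300, 0);   // start 300 px to the right
        WorkDetail.RenderTransform = slide;
        WorkDetail.Visibility = Visibility.Visible;
        var animation = new DoubleAnimation(300, 0, new Duration(TimeSpan.FromMilliseconds(250)));
        slide.BeginAnimation(TranslateTransform.XProperty, animation);   // slides right-to-left into place
    }

For the close direction one would animate from 0 back to 300 and set Visibility to Collapsed in the animation's Completed event handler, so the control is hidden only after the slide finishes.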

    Read the article

  • .Net Intermittent System.Web.Services.Protocols.SoapHeaderException

    - by ScottE
    We have a .net 3.5 web app that consumes third party web services. The proxy was created by adding a web reference to their wsdl. This proxy is not compiled. Our error logging is picking up frequent but intermittent exceptions: An exception of type 'System.Web.Services.Protocols.SoapHeaderException' occurred and was caught If I follow the url to the page that generated the exception, I can't recreate it. Edit: Here is most of the exception - where it bubbled up from Message : Internal Error Type : System.Web.Services.Protocols.SoapHeaderException, System.Web.Services, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Source : System.Web.Services Help link : Actor : Code : http://schemas.xmlsoap.org/soap/envelope/:Client Detail : Lang : Node : Role : SubCode : Data : System.Collections.ListDictionaryInternal TargetSite : System.Object[] ReadResponse(System.Web.Services.Protocols.SoapClientMessage, System.Net.WebResponse, System.IO.Stream, Boolean) Stack Trace : at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Vendor.getSearch(getSearchRequest getSearchRequest) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\be43c34e\b09edc7e\App_WebReferences.pww-cf-q.0.cs:line 73 Edit 2: Inner exceptions: I sometimes get the following inner exceptions logged: Message : Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. Type : System.IO.IOException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source : System Help link : Data : System.Collections.ListDictionaryInternal TargetSite : Int32 Read(Byte[], Int32, Int32) Stack Trace : at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.TlsStream.CallProcessAuthentication(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.ConnectStream.WriteHeaders(Boolean async) And/Or: Message : An existing connection was forcibly closed by the remote host Type : System.Net.Sockets.SocketException, System, Version=2.0.0.0, Culture=neutral, 
PublicKeyToken=b77a5c561934e089 Source : System Help link : ErrorCode : 10054 SocketErrorCode : ConnectionReset NativeErrorCode : 10054 Data : System.Collections.ListDictionaryInternal TargetSite : Int32 Receive(Byte[], Int32, Int32, System.Net.Sockets.SocketFlags) Stack Trace : at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) Update We're still working on it. Originally there was a route issue, which was resolved. We're still getting the inner exception with socket errors. We had MS support involved today, and they looked at some traces and network captures. The web service host does round-robin DNS, and it may respond to the SYN with a SYN/ACK from one IP, and the next response from a different IP. This is not good. This is likely quite specific to our situation, but perhaps it applies to others as well. Microsoft Network Monitor and an application trace got us the information we needed.
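One mitigation that is sometimes suggested for intermittent connection resets against remote web services (an assumption on my part; nothing in the post confirms it would help in this specific case) is to disable HTTP keep-alive on the generated proxy so that stale pooled connections are never reused. A sketch, where the proxy class name is hypothetical:

    // Partial class extending the web-reference proxy generated from the WSDL.
    public partial class VendorService : System.Web.Services.Protocols.SoapHttpClientProtocol
    {
        protected override System.Net.WebRequest GetWebRequest(System.Uri uri)
        {
            var request = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
            request.KeepAlive = false;   // open a fresh connection for each call
            return request;
        }
    }

The trade-off is a TCP handshake (and TLS negotiation) per call, so it is as much a diagnostic lever as a fix.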

    Read the article

  • How to use SQL file streaming win32 API and support WCF streaming

    - by Mahesh
    I'm using the Sql server file stream type to store large files in the backend. I'm trying to use WCF to stream the file across to the clients. I'm able to get the handle to the file using SQLFileStream (API). I then try to return this stream. I have implemented data chunking on the client side to retrieve the data from the stream. I'm able to do it for a regular filestream and a memory stream. Also, if I convert the sqlfilestream into a memorystream, that also works. The only thing that doesn't work is when I try to return the sqlfilestream. What am I doing wrong? I have tried both nettcpbinding with streaming enabled and http binding with MTOM encoding. This is the error message I am getting: Socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network issue.. Local socket timeout was 00:09:59.... Here is my sample code RemoteFileInfo info = new RemoteFileInfo(); info.FileName = "SampleXMLFileService.xml"; string pathName = DataAccess.GetDataSnapshotPath("DataSnapshot1"); SqlConnection connection = DataAccess.GetConnection(); SqlTransaction sqlTransaction = connection.BeginTransaction("SQLSileStreamingTrans"); SqlCommand command = new SqlCommand(); command.Connection = connection; command.Transaction = sqlTransaction; command.CommandText = "SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()"; byte[] transcationContext = command.ExecuteScalar() as byte[]; SqlFileStream stream = new SqlFileStream(pathName, transcationContext, FileAccess.Read); // byte[] bytes = new byte[stream.Length]; // stream.Read(bytes, 0, (int) stream.Length); // Stream reeturnStream = stream; // MemoryStream memoryStream = new MemoryStream(bytes); info.FileByteStream = stream; info.Length = info.FileByteStream.Length; connection.Close(); return info; [MessageContract] public class RemoteFileInfo : IDisposable { [MessageHeader(MustUnderstand = true)] public string FileName; [MessageHeader(MustUnderstand = true)] public long Length; [MessageBodyMember(Order = 1)] public System.IO.Stream FileByteStream; public void Dispose() { if (FileByteStream != null) { FileByteStream.Close(); FileByteStream = null; } } } Any help is appreciated
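One detail worth noting, as an observation rather than something stated in the question: a SqlFileStream is only readable while the transaction it was opened under is still active, so closing the connection before WCF has streamed the reply body will abort the transfer. A rough sketch of deferring the cleanup (DataAccess and RemoteFileInfo come from the question; the event wiring is illustrative):

    public RemoteFileInfo GetFile(string pathName)
    {
        SqlConnection connection = DataAccess.GetConnection();
        SqlTransaction transaction = connection.BeginTransaction();
        SqlCommand command = new SqlCommand(
            "SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()", connection, transaction);
        byte[] transactionContext = (byte[])command.ExecuteScalar();

        RemoteFileInfo info = new RemoteFileInfo();
        info.FileName = System.IO.Path.GetFileName(pathName);
        info.FileByteStream = new SqlFileStream(pathName, transactionContext, FileAccess.Read);
        info.Length = info.FileByteStream.Length;

        // Do NOT close the connection here; commit and close only after WCF
        // has finished sending the streamed reply.
        OperationContext.Current.OperationCompleted += (sender, args) =>
        {
            transaction.Commit();
            connection.Close();
        };
        return info;
    }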

    Read the article

  • Hey, Google: It’s Time to Add Multi-Window Multitasking To Android

    - by Chris Hoffman
    In 2012, Google’s Dianne Hackborn threatened to revoke CyanogenMod’s access to the Android Market if they moved forward with adding “Cornerstone” multitasking to their custom ROM. Samsung has since created their own multi-window multitasking feature. Dianne Hackborn said this “is something that needs to be done at the mainline platform level” so apps wouldn’t break. She was right — Android needs this as a standard feature and it’s time for Google to provide it. Doesn’t Android Have Multitasking? Android originally stood out from Apple’s iOS with its powerful multitasking. Applications can continue running in the background while you’re using another application. This makes Android powerful — you can even have BitTorrent clients downloading files in the background while using another app. Android still kept the design of a single app on screen at a time. This made a lot of sense when Android only ran on smartphones with small screens. Today, Android runs on everything from smaller smartphones all the way up to huge “phablets” like the Galaxy Note. Android has gone beyond phones and runs on 12-inch tablets, convertibles with keyboard docks, laptops, and even Android desktops. Android isn’t just a phone operating system. Samsung’s Multi-Window Isn’t Good Enough Samsung has tried to add value to Android by adding a multi-window feature. When you’re using a high-end phone like the Galaxy Note or Galaxy S, or a Galaxy tablet, you have the ability to run certain apps side-by-side with each other. There are big problems here. This only works on Samsung devices, and only on specific Samsung devices. To add support for this feature in a way that doesn’t break other apps, Samsung’s multi-window feature also only works with specific apps. You can’t just run any app in multi-window view, only the apps on the Multi Window bar Samsung provides. This prevents third-party apps from breaking, which is what Google was worried about with CyanogenMod’s Cornerstone feature. A feature that only works with a handful of apps on specific devices from a single manufacturer isn’t good enough. This feature needs to work on every Android device — or at least ones with suitably large screens and powerful enough internals. It needs to be an Android platform feature so application developers can ensure their apps will work properly with it on every device. Android developers shouldn’t have to add support for each manufacturer’s own multi-window feature if other manufacturers decide to copy Samsung. Floating Apps Are a Dirty Hack Floating apps also enable real multitasking. Remember that Android allows apps to run in the background while you’re using an app in the foreground. These apps can present interfaces that appear floating above the current app — think of it like using “always on top” to make a window always appear over every other app on a desktop operating system. You can install floating apps to browse the web, take notes, chat, and watch videos while using any app. Only apps specifically designed to run as floating apps will work, so you have to seek them out. Floating apps are also awkward to use because they float over the app you’re using, blocking parts of its interface. Microsoft added floating-window support to Skype for Android. You can have a video conversation and the other person’s face will always appear on your screen, even when you leave the Skype app. Microsoft is using more of Android’s multi-window multitasking power than Google is. 
Custom ROMs and Root-Only Tweaks Aren’t Acceptable Some custom ROMs are adding this feature to Android. Google threatened to revoke CyanogenMod’s access to the Android Market (now known as Google Play) if they added this feature because it could potentially break third-party apps. Today, other custom ROMs are working on split-screen multitasking. Samsung added their own version to their own devices. You can also get this feature by using a root-only Xposed Framework tweak known as XMultiWindow. If you have root access, you can get multi-window multitasking for any app on your device. This shouldn’t require rooting your device or installing a custom ROM. These third-party solutions often have awkward interfaces and bugs. We need an integrated, supported solution that works the same on every device. Why Multi-Window is Important Microsoft’s Windows 8.1 stands out among tablet operating systems for its powerful multitasking support, allowing you to view several apps side-by-side at the same time. Apple is also reported to be working on adding side-by-side apps to the iPad with iOS 8. On every competitor’s operating system, you’ll be able to view a web page while you write an email, watch a video while you browse the web, or chat with someone while you do anything else. But Android’s still remained frozen in time. Despite all Android’s underlying power — and despite the way Android allows apps to adapt to different screen sizes — Google is resisting adding this feature. Large-screen Android tablets like the Nexus 10 (remember that tablet Google hasn’t updated in over 18 months?) need this feature. So do huge phones, convertibles, laptops, and Android desktops. If tablets are the future of personal computing, we should be able to do more than one thing at a time on our tablets’ big screens. Microsoft, Samsung, and even Apple are realizing this — now it’s Google’s turn. Image Credit: Sergey Galyonkin on Flickr, Karlis Dambrans on Flickr

    Read the article

  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
    With the betas for .NET 4.5 and Visual Studio 11 and Windows 8 shipping many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine. When .NET 4.5 is installed it effectively replaces .NET 4.0 on the machine. .NET 4.0 gets overwritten by a new version of .NET 4.5 which - according to Microsoft - is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's so much more than backwards compatibility that makes this awkward at best and confusing at worst. What does ‘Replacement’ mean? When you install .NET 4.5 your .NET 4.0 assemblies in the \Windows\.NET Framework\V4.0.30319 folder are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies for example). The following screen shot demonstrates system.dll on my test machine running .NET 4.5 (left) and my production laptop running stock .NET 4.0 (right):   Clearly they are different files with a difference in file sizes (interesting that the 4.5 version is actually smaller). That’s not all. If you actually query the runtime version when .NET 4.5 is installed with Environment.Version you still get: 4.0.30319 If you open the properties of the System.dll assembly in .NET 4.5 you'll also see: Notice that the file version is also left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey to say the least. There’s no easy or obvious way to tell whether you are running on 4.0 or 4.5 – to the application they appear to be the same runtime version. And that is what Microsoft intends here. .NET 4.5 is intended as an in-place upgrade. Compile to 4.5 run on 4.0 – not quite! You can compile an application for .NET 4.5 and run it on the 4.0 runtime – that is until you hit a new feature that doesn’t exist on 4.0. At which point the app bombs at runtime. Oh joy! Say you write some code that is mostly .NET 4.0, but only has a few of the new features of .NET 4.5 like async/await buried deep in the bowels of the application where it only fires occasionally. .NET will happily start your application and run all the 4.0 code fine, until it hits that 4.5 code – and then crash unceremoniously at runtime. You can run .NET 4.0 applications on .NET 4.5 of course and that should work without much fanfare. Different than .NET 3.0/3.5 Note that this in-place replacement is very different from the side by side installs of .NET 2.0 and 3.0/3.5 which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime which wasn’t changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 2011, you can still target .NET 4.0 or you can target .NET 4.5. But you are in effect referencing the same set of assemblies for both regardless of which version you use.
What's different is the compiler used to compile and link your code, so compiling with .NET 4.0 gives you just the subset of the functionality that is available in .NET 4.0, but when you use the 4.5 compiler you get the full functionality of what’s actually available in the assemblies and extra libraries. It doesn’t look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications. Good news – Bad news Microsoft is trying hard to experiment with every possible permutation of releasing new versions of the .NET framework apparently. No two updates have been the same. Clearly updating to a full new version of .NET (ie. .NET 2.0, 4.0 and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky even by Microsoft’s standards. Especially given that .NET 4.5 includes a fairly significant update with all the async functionality baked into the runtime. Most of the IO APIs have been updated to support task based async operation which significantly affects many existing APIs. To make things worse .NET 4.5 will be the initial version of .NET that ships with Windows 8 so it will be with us for a long time to come unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn’t happen). This is the same story we had when Vista launched with .NET 3.0 which was a minor version that quickly was replaced by 3.5 which was more long lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions which ran on .NET 2.0. I can’t count the number of support calls and questions I’ve fielded because people couldn’t find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It’s all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and will end up thoroughly confused about which version is installed. It’s hard for me to see any upside to an in-place update and I haven’t really seen a good explanation of why this approach was decided on. Sure, if the version stays the same existing assembly bindings don’t break so applications can stay running through an update. I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won’t be recompiling all their code and thoroughly testing that code to work on .NET 4.5? A recompile requirement doesn’t seem that serious in light of a major version upgrade. Resources http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160 © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET
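Given the version-detection problem described above, one workaround that circulated at the time (offered here as a sketch, not as part of the original post) is to probe for a type that ships only with the 4.5 runtime:

    // Returns true on .NET 4.5, which added System.Reflection.ReflectionContext
    // to mscorlib; on a plain 4.0 runtime the probe returns null.
    static bool IsNet45OrNewer()
    {
        return Type.GetType("System.Reflection.ReflectionContext", false) != null;
    }

It is a heuristic rather than an official API, but it avoids guessing from build numbers.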

    Read the article

  • Uploading multiple images through Tumblr API

    - by Joseph Carrington
    I have about 300 images I want to upload to my new Tumblr account, because my old wordpress site got hacked and I no longer wish to use wordpress. I uploaded one image a day for 300 days, and I'd like to be able to take these images and upload them to my tumblr site using the api. The images are currently local, stored in /images/. They all have the date they were uploaded as the first ten characters of the filename (01-01-2009-filename.png), and I want to send this date parameter along as well. I want to be able to see the progress of the script by outputting the responses from the API to my error_log. Here is what I have so far, based on the tumblr api page. // Authorization info $tumblr_email = '[email protected]'; $tumblr_password = 'password'; // Tumblr script parameters $source_directory = "images/"; // For each file, assign the file to a pointer. Here's the first stumbling block: how do I get all of the images in the directory and loop through them? Once I have a for or while loop set up, I assume this is the next step: $post_data = fopen(dirname(__FILE__) . $source_directory . $current_image, 'r'); $post_date = substr($current_image, 0, 10); // Data for new record $post_type = 'photo'; // Prepare POST request $request_data = http_build_query( array( 'email' => $tumblr_email, 'password' => $tumblr_password, 'type' => $post_type, 'data' => $post_data, 'date' => $post_date, 'generator' => 'Multi-file uploader' ) ); // Send the POST request (with cURL) $c = curl_init('http://www.tumblr.com/api/write'); curl_setopt($c, CURLOPT_POST, true); curl_setopt($c, CURLOPT_POSTFIELDS, $request_data); curl_setopt($c, CURLOPT_RETURNTRANSFER, true); $result = curl_exec($c); $status = curl_getinfo($c, CURLINFO_HTTP_CODE); curl_close($c); // Output response to error_log error_log($result); So, I'm stuck on how to use PHP to read a file directory, loop through each of the files, and do things to the name / with the file itself. I also need to know how to set the data parameter, as in choosing multi-part / formdata. I also don't know anything about cURL.
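A sketch of the missing loop, assuming the images sit in an images/ directory next to the script (the filename-date convention is taken from the question):

    // Loop over every image, deriving the post date from the first ten
    // characters of the filename (e.g. "01-01-2009-filename.png").
    foreach (glob(dirname(__FILE__) . '/images/*.png') as $path) {
        $current_image = basename($path);
        $post_date = substr($current_image, 0, 10);
        $post_data = file_get_contents($path);   // raw bytes for the 'data' parameter
        // ... build $request_data and send the cURL request as in the snippet above ...
    }

Note that http_build_query() produces a urlencoded body rather than multipart/form-data; to send the photo as a proper multipart upload with cURL, one approach is to pass an array (rather than a string) to CURLOPT_POSTFIELDS.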

    Read the article

  • Django: What's an awesome plugin to maintain images in the admin?

    - by meder
    I have an articles entry model and I have an excerpt and description field. If a user wants to post an image then I have a separate ImageField which has the default standard file browser. I've tried using django-filebrowser but I don't like the fact that it requires django-grappelli nor do I necessarily want a flash upload utility - can anyone recommend a tool where I can manage image uploads, and basically replace the file browser provided by Django with an image-picking browser? In the future I'd probably want it to handle image resizing and specify default image sizes for certain article types. Edit: I'm trying out adminfiles now but I'm having issues installing it. I grabbed it and added it to my python path, added it to INSTALLED_APPS, created the databases for it, uploaded an image. I followed the instructions to modify my Model to specify adminfiles_fields and registered it, but it's not applying in my admin, here's my admin.py for articles: from django.contrib import admin from django import forms from articles.models import Category, Entry from tinymce.widgets import TinyMCE from adminfiles.admin import FilePickerAdmin class EntryForm( forms.ModelForm ): class Media: js = ['/media/tinymce/tiny_mce.js', '/media/tinymce/load.js']#, '/media/admin/filebrowser/js/TinyMCEAdmin.js'] class Meta: model = Entry class CategoryAdmin(admin.ModelAdmin): prepopulated_fields = { 'slug': ['title'] } class EntryAdmin( FilePickerAdmin ): adminfiles_fields = ('excerpt',) prepopulated_fields = { 'slug': ['title'] } form = EntryForm admin.site.register( Category, CategoryAdmin ) admin.site.register( Entry, EntryAdmin ) Here's my Entry model: class Entry( models.Model ): LIVE_STATUS = 1 DRAFT_STATUS = 2 HIDDEN_STATUS = 3 STATUS_CHOICES = ( ( LIVE_STATUS, 'Live' ), ( DRAFT_STATUS, 'Draft' ), ( HIDDEN_STATUS, 'Hidden' ), ) status = models.IntegerField( choices=STATUS_CHOICES, default=LIVE_STATUS ) tags = TagField() categories = models.ManyToManyField( Category ) title = models.CharField( max_length=250 ) excerpt = models.TextField( blank=True ) excerpt_html = models.TextField(editable=False, blank=True) body_html = models.TextField( editable=False, blank=True ) article_image = models.ImageField(blank=True, upload_to='upload') body = models.TextField() enable_comments = models.BooleanField(default=True) pub_date = models.DateTimeField(default=datetime.datetime.now) slug = models.SlugField(unique_for_date='pub_date') author = models.ForeignKey(User) featured = models.BooleanField(default=False) def save( self, force_insert=False, force_update= False): self.body_html = markdown(self.body) if self.excerpt: self.excerpt_html = markdown( self.excerpt ) super( Entry, self ).save( force_insert, force_update ) class Meta: ordering = ['-pub_date'] verbose_name_plural = "Entries" def __unicode__(self): return self.title Edit #2: To clarify I did move the media files to my media path and they are indeed rendering the image area, I can upload fine, the <<<image>>> tag is inserted into my editable MarkItUp w/ Markdown area but it isn't rendering in the MarkItUp preview - perhaps I just need to apply the |upload_tags into that preview. I'll try adding it to my template which posts the article as well.

    Read the article

  • Iterating through a directory with Ant

    - by Shaggy Frog
    Let's say I have a collection of PDF files with the following paths: /some/path/pdfs/birds/duck.pdf /some/path/pdfs/birds/goose.pdf /some/path/pdfs/insects/fly.pdf /some/path/pdfs/insects/mosquito.pdf What I'd like to do is generate thumbnails for each PDF that respect the relative path structure, and output to another location, i.e.: /another/path/thumbnails/birds/duck.png /another/path/thumbnails/birds/goose.png /another/path/thumbnails/insects/fly.png /another/path/thumbnails/insects/mosquito.png I'd like this to be done in Ant. Assume I'm going to use Ghostscript on the command line and I've already worked out the call to GS: <exec executable="${ghostscript.executable.name}"> <arg value="-q"/> <arg value="-r72"/> <arg value="-sDEVICE=png16m"/> <arg value="-sOutputFile=${thumbnail.image.path}"/> <arg value="${input.pdf.path}"/> </exec> So what I need to do is work out the correct values for ${thumbnail.image.path} and ${input.pdf.path} while traversing the PDF input directory. I have access to ant-contrib (just installed the "latest", which is 1.0b3) and I'm using Ant 1.8.0. I think I can make something work using the <for> task, <fileset>s and <mapper>s, but I am having trouble putting it all together. I tried something like: <for param="file"> <path> <fileset dir="${some.dir.path}/pdfs"> <include name="**/*.pdf"/> </fileset> </path> <sequential> <echo message="@{file}"/> </sequential> </for> But unfortunately the @{file} property is an absolute path, and I can't find any simple way of decomposing it into the relative components. If I can only do this using a custom task, I guess I could write one, but I'm hoping I can just plug together existing components.
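One possible direction, sketched with ant-contrib's <propertyregex> and <var> inside the loop (untested; the regexp would need adjusting for your actual base path, and the output directories would have to exist or be created with <mkdir>):

    <for param="file">
      <path>
        <fileset dir="${some.dir.path}/pdfs">
          <include name="**/*.pdf"/>
        </fileset>
      </path>
      <sequential>
        <var name="relative.name" unset="true"/>
        <!-- capture everything after .../pdfs/ and before the extension -->
        <propertyregex property="relative.name"
                       input="@{file}"
                       regexp=".*[/\\]pdfs[/\\](.*)\.pdf"
                       replace="\1"/>
        <exec executable="${ghostscript.executable.name}">
          <arg value="-q"/>
          <arg value="-r72"/>
          <arg value="-sDEVICE=png16m"/>
          <arg value="-sOutputFile=/another/path/thumbnails/${relative.name}.png"/>
          <arg value="@{file}"/>
        </exec>
      </sequential>
    </for>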

    Read the article

  • Windows Azure Evolution – Welcome to VS2012

    - by Shaun
    When Microsoft released the first preview versions of Windows 8 and Visual Studio, many people in the community were asking if the Windows Azure tools were available for them. The answer was “NO”. Microsoft promised that the Windows Azure tools would only support Visual Studio 2010 at first, but that they would work once the 2012 release was final. And now, along with the newly published Windows Azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with the Visual Studio 2012 RC.   You can retrieve the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also needs some other components. To download the latest Windows Azure SDK from the Web Platform Installer, just go to the Windows Azure website, click Develop, then .NET, and click the blue “install” button. Then you need to select which version of Visual Studio you want to use, Visual Studio 2010 or Visual Studio 2012 RC. After selecting the current version you will download an EXE file. This file will lead you to install the Web Platform Installer 4.0 (if you haven’t installed it) and the latest Windows Azure SDK. You can see the version name is June 2012, 1.7. Finally the WebPI will detect the dependent components you need to download and begin to install them. But if you want to challenge yourself you can download the components and install them manually. The standalone installations are listed in this page with instructions on how to install them, along with the necessary prerequisites.   Once you finish the installation you can open the Visual Studio 2012 RC and, as usual, it needs to be run as administrator. If you click the New Project link from the start page and navigate to the Cloud category, you will find that there is no project template available. Is there anything wrong? No: if you change the target framework from the default .NET 4.5 to .NET 4 you will see the azure project template. This is because currently the Windows Azure instances do not support .NET 4.5. After clicking OK you will see the role creation window, which is similar to what you have seen before. But there are some new role templates in this SDK. Firstly you will have an ASP.NET MVC 4 web role available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile and WebAPI on the cloud. Then there are two new worker role templates, “Cache Worker Role” and “Worker Role with Service Bus Queue”. “Worker Role with Service Bus Queue” is a worker role which has the necessary references added to access the Windows Azure Service Bus Queue. It also has some basic sample code in the worker role class which reads messages from the queue when started. The “Cache Worker Role” is a worker role which has the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching. It allows the role instances to use their memory as an in-memory distributed cache cluster. By using this feature you can have one or more worker roles act as dedicated cache clusters. Alternatively, you can use part of your web role and worker role’s memory as the cache cluster as well. Let’s just create an ASP.NET MVC 4 Web Role, and click F5 to run it under the local emulator. If you have been working with azure for a while you will know that you need to set up the local storage emulator before running locally if it’s a fresh azure SDK installation.
But in this version, when we start our azure project, Visual Studio will check if the storage emulator has been initialized. If not, it will run the initializer automatically. And as you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature. It will create the emulator database and tables in the default local database. You can set the storage emulator to use a standard SQL Server default instance by using the command “dsinit /instance:.”. The “dsinit” tool is now located at %PROGRAM FILES%\Microsoft SDKs\Windows Azure\Emulator\devstore. After Visual Studio has compiled and deployed the package our website should be shown in the browser. This is the MVC 4 Web Role home page on my Windows 8 machine in IE10. Another thing you might notice is that in this version the compute emulator utilizes IIS Express to host the web roles instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code for accessing the storage service. All of this is the same as what you are doing now on SDK 1.6. You can switch to using IIS to run your web role in the local emulator: just open the windows azure project property window and, on the Web page, select “Use IIS Web Server”. For more information about this please have a look at Nuno’s blog post. In the role property page in Visual Studio there are no massive changes. You can configure your role settings such as the endpoints, certificates, local storage, etc. One thing that was added is the Caching tab. Here you can specify whether to enable the caching feature or not, and how much memory you want to use as the cache cluster. I will introduce more details about it in future posts. The publish and package features are also unchanged. You can publish your project to azure directly through Visual Studio 2012, or you can create the package and upload it manually. Below is the SDK version of my deployment, which is 1.7.30602.1703, in the developer portal.   Summary In this post I introduced the new Windows Azure SDK 1.7, especially how it works with the latest Visual Studio 2012 RC. There are no significant changes in the Visual Studio tooling in this version, but there are some small enhancements such as ASP.NET MVC 4, the Cache Worker Role, and the use of SQL Server 2012 LocalDB and IIS Express.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
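As a footnote to the “Worker Role with Service Bus Queue” template mentioned above, the receive loop it wires up looks roughly like the sketch below (simplified from memory, not copied from the template; the queue name and connection string are placeholders, and the types come from Microsoft.ServiceBus.Messaging):

    // Inside the worker role's Run() method.
    QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "ProcessingQueue");
    while (!IsStopped)
    {
        BrokeredMessage message = client.Receive();
        if (message != null)
        {
            // ... process the message ...
            message.Complete();   // remove it from the queue on success
        }
    }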

    Read the article

< Previous Page | 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572  | Next Page >