Search Results

Search found 11358 results on 455 pages for 'utf 16'.

  • Fixing Corrupted Text

    - by oort
    I have text which looks like this: supposed to undergo yearly cardiac exam in order to stay on transplant list. But, there are patients who are missing important cardiac information. It is yo ur job as an intern on call to make sure that you fin As you can see, the first line is fine, but then the second line is corrupt. It looks like this even when I open it using Vim or LibreOffice. Is there a way to fix this? I've tried changing the encoding to UTF-8 but to no avail. Thanks!

  • The maximum message size quota for incoming messages (65536) has been exceeded.

    - by DaleyKD
    My WCF Service has an OperationContract that accepts, as a parameter, an array of objects. This can potentially be quite large. After looking for fixes for Bad Request: 400, I found the real reason: the maximum message size. I know this question has been asked before in MANY places. I've tried what everyone says: "Increase the sizes in the client and server config files." I have. It still doesn't work. My Service's web.config: <system.serviceModel> <services> <service name="myService"> <endpoint name="myEndpoint" address="" binding="basicHttpBinding" bindingConfiguration="myBinding" contract="Meisel.WCF.PDFDocs.IPDFDocsService" /> </service> </services> <bindings> <basicHttpBinding> <binding name="myBinding" closeTimeout="00:11:00" openTimeout="00:11:00" receiveTimeout="00:15:00" sendTimeout="00:15:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" maxBufferPoolSize="2147483647" transferMode="Buffered" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None" /> </binding> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> My Client's app.config: <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_IPDFDocsService" closeTimeout="00:11:00" openTimeout="00:11:00" receiveTimeout="00:10:00" sendTimeout="00:11:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="2147483647" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://localhost:8451/PDFDocsService.svc" behaviorConfiguration="MoreItemsInObjectGraph" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IPDFDocsService" contract="PDFDocsService.IPDFDocsService" name="BasicHttpBinding_IPDFDocsService" /> </client> <behaviors> <endpointBehaviors> <behavior name="MoreItemsInObjectGraph"> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> What can I possibly be missing or doing wrong? It's as though the service is ignoring what I typed in the maxReceivedBufferSize. Thanks in advance, Kyle UPDATE Here are two other StackOverflow questions where they never received an answer, either: http://stackoverflow.com/questions/2880623/maxreceivedmessagesize-adjusted-but-still-getting-the-quotaexceedexception-with http://stackoverflow.com/questions/2569715/wcf-maxreceivedmessagesize-property-not-taking

  • How to Determine the Size of MSADO Command Parameters

    - by Adam
    I am new to MS ADO and trying to understand how to set the size on command parameters as created by the command.CreateParameter (Name, Type, Direction, Size, Value) The documentation says the following: Size Optional. A Long value that specifies the maximum length for the parameter value in characters or bytes. ... If you specify a variable-length data type in the Type argument, you must either pass a Size argument or set the Size property of the Parameter object before appending it to the Parameters collection; otherwise, an error occurs. 1.) What should one pass for fixed-size parameters? Is it a "don't care"? I was a bit confused by the example found here, in which they set size to 3 for an adInteger parameter with Value set to a variant of type VT_I2 pPrmByRoyalty->Type = adInteger; pPrmByRoyalty->Size = 3; pPrmByRoyalty->Direction = adParamInput; pPrmByRoyalty->Value = vtroyal; VT_I2 implies two bytes. A tagVARIANT struct is 16 bytes. How did they land on three? I see that the enum value for adInteger happens to be three, but I suspect that is just a coincidence. So it's a bit confusing what to pass for fixed-size parameters. The team I'm working with has always passed sizeof(int) for adInteger, and it seems to work. Is that correct? Now, for "variable-length" parameters: we are instructed by the documentation to pass "the maximum length .. in characters or bytes". 2.) For adVarChar, is it sufficient to pass the max width as defined in the database? 3.) What about the Wide types (e.g. adVarWChar)? Is it characters or bytes? 4.) How about adVariant, which could contain fixed- or variable-length data? 5.) Do arrays ever come into play here? (we don't pass them as parameters, just curious) Any references or personal insights are welcome.

  • Can't install thin by using rubygems on Ubuntu 9.10

    - by skyfive
    How can I fix this error, and install thin or other gems? $ sudo gem install thin Building native extensions. This could take a while... ERROR: Error installing thin: ERROR: Failed to build gem native extension. /usr/bin/ruby1.9.1 extconf.rb checking for rb_trap_immediate in ruby.h,rubysig.h... *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.9.1 /usr/lib/ruby/1.9.1/mkmf.rb:362:in `try_do': The complier failed to generate an executable file. (RuntimeError) You have to install development tools first. from /usr/lib/ruby/1.9.1/mkmf.rb:425:in `try_compile' from /usr/lib/ruby/1.9.1/mkmf.rb:543:in `try_var' from /usr/lib/ruby/1.9.1/mkmf.rb:791:in `block in have_var' from /usr/lib/ruby/1.9.1/mkmf.rb:668:in `block in checking_for' from /usr/lib/ruby/1.9.1/mkmf.rb:274:in `block (2 levels) in postpone' from /usr/lib/ruby/1.9.1/mkmf.rb:248:in `open' from /usr/lib/ruby/1.9.1/mkmf.rb:274:in `block in postpone' from /usr/lib/ruby/1.9.1/mkmf.rb:248:in `open' from /usr/lib/ruby/1.9.1/mkmf.rb:270:in `postpone' from /usr/lib/ruby/1.9.1/mkmf.rb:667:in `checking_for' from /usr/lib/ruby/1.9.1/mkmf.rb:790:in `have_var' from extconf.rb:16:in `' Gem files will remain installed in /var/lib/gems/1.9.1/gems/eventmachine-0.12.10 for inspection. Results logged to /var/lib/gems/1.9.1/gems/eventmachine-0.12.10/ext/gem_make.out Addtional Infomation as below $ cat /etc/issue Ubuntu 9.10 \n \l $ dpkg -l | grep ruby ii libreadline-ruby1.9.1 1.9.1.243-2 Readline interface for Ruby 1.9.1 ii libruby1.9.1 1.9.1.243-2 Libraries necessary to run Ruby 1.9.1 ii ruby1.9.1 1.9.1.243-2 Interpreter of object-oriented scripting lan ii ruby1.9.1-dev 1.9.1.243-2 Header files for compiling extension modules ii rubygems1.9.1 1.3.5-1ubuntu2 package management framework for Ruby librar $ ruby -v ruby 1.9.1p243 (2009-07-16 revision 24175) [x86_64-linux] $ gem list *** LOCAL GEMS *** rack (1.1.0) sinatra (1.0)

  • Validate a string in a table in SQL Server - CLR function or T-SQL (Question updated)

    - by Ashish Gupta
    I need to check if a column value (string) in a SQL Server table starts with a lowercase letter and can only contain '_', '-', digits and letters. I know I can use a SQL Server CLR function for that. However, I am trying to implement that validation as a scalar UDF and have made very little headway... I can use 'NOT LIKE', but I am not sure how to validate the string irrespective of the order of characters, or in other words how to write such a pattern in SQL. Am I better off using a SQL CLR function? Any help will be appreciated. Thanks in advance. Thank you everyone for the comments. This morning I chose to go the CLR function way. For what I was trying to achieve, I created one CLR function which validates an input string and called it from a SQL UDF, and it works well. To measure the performance of a T-SQL UDF that wraps a SQL CLR function versus a plain T-SQL UDF, I created a SQL CLR function which just checks whether the input string contains only lowercase letters, returning true or false, and called it from a UDF (IsLowerCaseCLR). After that I also created a regular T-SQL UDF (IsLowerCaseTSQL) which does the same thing using 'NOT LIKE'. Then I created a table (Person) with columns Name (varchar) and IsValid (bit) and populated it with names to test. Data: 1000 records with 'Ashish' as the value for the Name column and 1000 records with 'ashish' as the value for the Name column. Then I ran the following: UPDATE Person Set IsValid=1 WHERE dbo.IsLowerCaseTSQL (Name) This updated 1000 records (with IsValid=1) and took less than a second. I deleted all the data in the table and repopulated it with the same data. Then I updated the same table using the SQL CLR UDF (with IsValid=1) and this took 3 seconds! If the update covers 5000 records, the regular UDF takes 0 seconds compared to the CLR UDF's 16 seconds! I know very little about T-SQL pattern matching, or I would have tested my actual, more complex validation criteria. But I just wanted to know: even if I could have written that, would it have been faster than the SQL CLR function, considering the example above? Are we using SQL CLR because we can implement much richer logic that would be difficult to write in regular SQL? Sorry for this long post. I just want to hear from the experts. Please feel free to ask if anything here is unclear. Thank you again for your time.

  • Flex/Flash 4 datagrid displays raw xml

    - by Setori
    Problem: Flex/Flash4 client (built with FlashBuilder4) displays the xml sent from the server exactly as is - the datagrid keeps the format of the xml. I need the datagrid to parse the input and place the data in the correct rows and columns of the datagrid. flow: click on a date in the tree and it makes a server request for batch information in xml form. Using a CallResponder I then update the datagrid's dataProvider. [code] <fx:Script> <![CDATA[ import mx.controls.Alert; [Bindable]public var selectedTreeNode:XML; public function taskTreeChanged(event:Event):void { selectedTreeNode=Tree(event.target).selectedItem as XML; var searchHubId:String = selectedTreeNode.@hub; var searchDate:String = selectedTreeNode.@lbl; if((searchHubId == "") || (searchDate == "")){ return; } findShipmentBatches(searchDate,searchHubId); } protected function findShipmentBatches(searchDate:String, searchHubId:String):void{ findShipmentBatchesResult.token = actWs.findShipmentBatches(searchDate, searchHubId); } protected function updateBatchDataGridDP():void{ task_list_dg.dataProvider = findShipmentBatchesResult.lastResult; } ]]> </fx:Script> <fx:Declarations> <actws:ActWs id="actWs" fault="Alert.show(event.fault.faultString + '\n' + event.fault.faultDetail)" showBusyCursor="true"/> <s:CallResponder id="findShipmentBatchesResult" result="updateBatchDataGridDP()"/> </fx:Declarations> <mx:AdvancedDataGrid id="task_list_dg" width="100%" height="95%" paddingLeft="0" paddingTop="0" paddingBottom="0"> <mx:columns> <mx:AdvancedDataGridColumn headerText="Receiving date" dataField="rd"/> <mx:AdvancedDataGridColumn headerText="Msg type" dataField="mt"/> <mx:AdvancedDataGridColumn headerText="SSD" dataField="ssd"/> <mx:AdvancedDataGridColumn headerText="Shipping site" dataField="sss"/> <mx:AdvancedDataGridColumn headerText="File name" dataField="fn"/> <mx:AdvancedDataGridColumn headerText="Batch number" dataField="bn"/> </mx:columns> </mx:AdvancedDataGrid> //xml example from server <batches> <batch> <rd>2010-04-23 16:31:00.0</rd> <mt>SC1REVISION01</mt> <ssd>2010-02-18 00:00:00.0</ssd> <sss>100000009</sss> <fn>Revision 1-DF-Ocean-SC1SUM-Quanta-PACT-EMEA-Scheduled Ship Date 20100218.csv</fn> <bn>10041</bn> </batch> <batches> [/code] and the xml is pretty much displayed exactly as is shown in the example above in the datagrid columns... I would appreciate your assistance.

  • Mixing policy-based design with CRTP in C++

    - by Eitan
    I'm attempting to write a policy-based host class (i.e., a class that inherits from its template class), with a twist, where the policy class is also templated by the host class, so that it can access its types. One example where this might be useful is where a policy (used like a mixin, really), augments the host class with a polymorphic clone() method. Here's a minimal example of what I'm trying to do: template <template <class> class P> struct Host : public P<Host<P> > { typedef P<Host<P> > Base; typedef Host* HostPtr; Host(const Base& p) : Base(p) {} }; template <class H> struct Policy { typedef typename H::HostPtr Hptr; Hptr clone() const { return Hptr(new H((Hptr)this)); } }; Policy<Host<Policy> > p; Host<Policy> h(p); int main() { return 0; } This, unfortunately, fails to compile, in what seems to me like circular type dependency: try.cpp: In instantiation of ‘Host<Policy>’: try.cpp:10: instantiated from ‘Policy<Host<Policy> >’ try.cpp:16: instantiated from here try.cpp:2: error: invalid use of incomplete type ‘struct Policy<Host<Policy> >’ try.cpp:9: error: declaration of ‘struct Policy<Host<Policy> >’ try.cpp: In constructor ‘Host<P>::Host(const P<Host<P> >&) [with P = Policy]’: try.cpp:17: instantiated from here try.cpp:5: error: type ‘Policy<Host<Policy> >’ is not a direct base of ‘Host<Policy>’ If anyone can spot an obvious mistake, or has successfuly mixing CRTP in policies, I would appreciate any help.

  • (Apache) Weird characters with Roundcube (PHP)

    - by thonixx
    Yes, I saw all the questions about weird characters at the end of a PHP script. I am asking here because no solution from the internet or Server Fault worked. At this page: https://webmail.pixelwolf.ch/test/ there are some mysterious characters, and that is why my Roundcube does not work. What I have already checked and tried: 1. added AddDefaultCharset UTF-8 2. changed AddDefaultCharset to ISO xxx (don't know the exact string right now) 3. disabled php5filter 4. checked gzip (following "PHP returns junk characters at end of everything") but the characters are still there. Note: on my local server there aren't any of those characters; locally it just works. So what else can I check?

  • .NET XML Serialization without <?xml> root node

    - by Graphain
    Hi, I'm trying to generate XML like this: <?xml version="1.0"?> <!DOCTYPE APIRequest SYSTEM "https://url"> <APIRequest> <Head> <Key>123</Key> </Head> <ObjectClass> <Field>Value</Field </ObjectClass> </APIRequest> I have a class (ObjectClass) decorated with XMLSerialization attributes like this: [XmlRoot("ObjectClass")] public class ObjectClass { [XmlElement("Field")] public string Field { get; set; } } And my really hacky intuitive thought to just get this working is to do this when I serialize: ObjectClass inst = new ObjectClass(); XmlSerializer serializer = new XmlSerializer(inst.GetType(), ""); StringWriter w = new StringWriter(); w.WriteLine(@"<?xml version=""1.0""?>"); w.WriteLine("<!DOCTYPE APIRequest SYSTEM"); w.WriteLine(@"""https://url"">"); w.WriteLine("<APIRequest>"); w.WriteLine("<Head>"); w.WriteLine(@"<Field>Value</Field>"); w.WriteLine(@"</Head>"); XmlSerializerNamespaces ns = new XmlSerializerNamespaces(); ns.Add("", ""); serializer.Serialize(w, inst, ns); w.WriteLine("</APIRequest>"); However, this generates XML like this: <?xml version="1.0"?> <!DOCTYPE APIRequest SYSTEM "https://url"> <APIRequest> <Head> <Key>123</Key> </Head> <?xml version="1.0" encoding="utf-16"?> <ObjectClass> <Field>Value</Field> </ObjectClass> </APIRequest> i.e. the serialize statement is automatically adding a <?xml root element. I know I'm attacking this wrong so can someone point me in the right direction? As a note, I don't think it will make practical sense to just make an APIRequest class with an ObjectClass in it (because there are say 20 different types of ObjectClass that each needs this boilerplate around them) but correct me if I'm wrong.

  • Ruby Rails Mongrel server failing to serve on OS X 10.6

    - by Mark V
    Hi there I'm fairly new to Rails and the Mac, and doing my first deploy... I'm trying to set up my rails app on a brand new Apple mini-server running OXS1.6 (Snow Leopard). It is currently running fine on my new iMac i7 (same OS). I start mongrel with this command: mongrel_rails start -e production -p 3000 -d -a 127.0.0.1 --debug And it starts giving this output in the log/mongrel.log ** Daemonized, any open files are closed. Look at log/mongrel.pid and log/mongrel.log for info. ** Starting Mongrel listening at 127.0.0.1:3000 ** Installing debugging prefixed filters. Look in log/mongrel_debug for the files. ** Starting Rails with production environment... /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement /Users/danadmin/ServiceApp/ServiceApp/app/helpers/input_grid_manager.rb:9: warning: already initialized constant ID_PREFIX /Users/danadmin/ServiceApp/ServiceApp/app/helpers/input_grid_manager.rb:10: warning: already initialized constant ADD_ID ** Rails loaded. ** Loading any Rails specific GemPlugins ** Signals ready. TERM => stop. USR2 => restart. INT => stop (no restart). ** Rails signals registered. HUP => reload (without restart). It might not work well. ** Mongrel 1.1.5 available at 127.0.0.1:3000 ** Writing PID file to log/mongrel.pid The output is the same on my dev iMac (including the warnings). The difference is that accessing http://127.0.0.1:3000 on my iMac serves up the app's login page. Where as on the mac mini-server accessing the same results in this error 500 text from mongrel: "We're sorry, but something went wrong." It's as if rails is not working. I'm pretty good at figuring things out if I have some log file messages to direct me, but mongrel.log has no error message (the output remains the same as above), and the log/production.log is empty (which makes me think rails has not started?). My gems are all the same versions between machines and so is the app code; and there are no clues I can see in any of the mongrel_debug logs, except that rails.log on the mac mini-server and the iMac are different. 
After a start and single access, first is the rails.log from the mac mini-server: D, [2010-04-15T13:45:34.870406 #6914] DEBUG -- : TRACING ON Thu Apr 15 13:45:34 +1200 2010 Thu Apr 15 13:46:08 +1200 2010 REQUEST / --- !map:Mongrel::HttpParams SERVER_NAME: 127.0.0.1 HTTP_ACCEPT: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 HTTP_CACHE_CONTROL: max-age=0 HTTP_HOST: 127.0.0.1:3000 HTTP_USER_AGENT: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_0; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.9 Safari/533.2 REQUEST_PATH: / SERVER_PROTOCOL: HTTP/1.1 HTTP_ACCEPT_LANGUAGE: en-US,en;q=0.8 REMOTE_ADDR: 127.0.0.1 PATH_INFO: / SERVER_SOFTWARE: Mongrel 1.1.5 SCRIPT_NAME: / HTTP_VERSION: HTTP/1.1 REQUEST_URI: / SERVER_PORT: "3000" HTTP_ACCEPT_CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3 REQUEST_METHOD: GET GATEWAY_INTERFACE: CGI/1.2 HTTP_ACCEPT_ENCODING: gzip,deflate,sdch HTTP_CONNECTION: keep-alive While on my iMac it seems the same except for the addition of the HTTP_COOKIE and the HTTP_IF_NONE_MATCH, here is rails.log from my iMac # Logfile created on Thu Apr 15 13:41:42 +1200 2010 by logger.rb/22285 D, [2010-04-15T13:41:42.934088 #2070] DEBUG -- : TRACING ON Thu Apr 15 13:41:42 +1200 2010 Thu Apr 15 13:42:05 +1200 2010 REQUEST / --- !map:Mongrel::HttpParams SERVER_NAME: 127.0.0.1 HTTP_ACCEPT: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 HTTP_HOST: 127.0.0.1:3000 HTTP_USER_AGENT: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.9 Safari/533.2 REQUEST_PATH: / SERVER_PROTOCOL: HTTP/1.1 HTTP_IF_NONE_MATCH: "\"216cc63ce3c1f286ef8dd4f18f354f6e\"" HTTP_ACCEPT_LANGUAGE: en-US,en;q=0.8 REMOTE_ADDR: 127.0.0.1 PATH_INFO: / SERVER_SOFTWARE: Mongrel 1.1.5 SCRIPT_NAME: / HTTP_COOKIE: _ServiceApp_session=BAh7DDonY3VzdG9tZXJfbGlzdF9maWx0ZXJfam9iX3N0YXR1c19pZGn6Og9zZXNzaW9uX2lkIiU0ZTk1ZWZjMmViMGU3NjE2YzA0NDc2YTkxYzJlNDZiOToaY3VycmVudF9jdXN0b21lcl9uYW1lIilUSEUgQ1VTVE9NRVIgTkFNRSBORUVEUyBUTyBCRSBMT0FERUQ6EF9jc3JmX3Rva2VuIjFuT1JMUWk0NlZrWlM3c2lUN3BaWCs5NkhRajhxYnFwRnhzVHVTWXEvUWY0PToZam9iX2xpc3RfZmlsdGVyX3RleHQiADogam9iX2xpc3RfZmlsdGVyX2VtcGxveWVlX2lkafo6HmN1c3RvbWVyX2xpc3RfZmlsdGVyX3RleHQiAA%3D%3D--d01bc5d0b457ad524d16cb3402b5dfed9afce83d HTTP_VERSION: HTTP/1.1 REQUEST_URI: / SERVER_PORT: "3000" HTTP_ACCEPT_CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3 REQUEST_METHOD: GET GATEWAY_INTERFACE: CGI/1.2 HTTP_ACCEPT_ENCODING: gzip,deflate,sdch HTTP_CONNECTION: keep-alive Any direction or ideas would be greatly appreciated. Thanks.

  • What'd be a good pattern on Doctrine to have multiple languages

    - by PERR0_HUNTER
    Hi! I have this challenge which consist in having a system that offers it's content in multiple languages, however a part of the data contained in the system is not translatable such as dates, ints and such. I mean if I have a content on the following YAML Corporativos: columns: nombre: type: string(254) notnull: true telefonos: type: string(500) email: type: string(254) webpage: type: string(254) CorporativosLang: columns: corporativo_id: type: integer(8) notnull: true lang: type: string(16) fixed: false ubicacion: type: string() fixed: false unsigned: false primary: false notnull: true autoincrement: false contacto: type: string() fixed: false unsigned: false primary: false notnull: true autoincrement: false tipo_de_hoteles: type: string(254) fixed: false unsigned: false primary: false notnull: true autoincrement: false paises: type: string() fixed: false unsigned: false primary: false notnull: true autoincrement: false relations: Corporativo: class: Corporativos local: corporativo_id foreign: id type: one foreignAlias: Data This would allow me to have different corporative offices, however the place of the corp, the contact and other things can be translate into a different language (lang) Now this code here would create me a brand new corporative office with 2 translations $corporativo = new Corporativos(); $corporativo->nombre = 'Duck Corp'; $corporativo->telefonos = '66303713333'; $corporativo->email = '[email protected]'; $corporativo->webpage = 'http://quack.com'; $corporativo->Data[0]->lang = 'en'; $corporativo->Data[0]->ubicacion = 'zomg'; $corporativo->Data[1]->lang = 'es'; $corporativo->Data[1]->ubicacion = 'zomg amigou'; the thing now is I don't know how to retrieve this data in a more friendly way, because if I'd like to access my Corporative info in english I'd had to run DQL for the corp and then another DQL for the specific translation in english, What I'd love to do is have my translatable fields available in the root so I could simply access them $corporativo = new Corporativos(); $corporativo->nombre = 'Duck Corp'; $corporativo->telefonos = '66303713333'; $corporativo->email = '[email protected]'; $corporativo->webpage = 'http://quack.com'; $corporativo->lang = 'en'; $corporativo->ubicacion = 'zomg'; this way the translatable fields would be mapped to the second table automatically. I hope I can explain my self clear :( any suggestions ?

  • JFace ApplicationWindow: createContents isn't working

    - by jasonh
    I'm attempting to create a window that is divided into three parts. A non-resizable header and footer and then a content area that expands to fill the remaining area in the window. To get started, I created the following class: public class MyWindow extends ApplicationWindow { Color white; Font mainFont; Font headerFont; public MyWindow() { super(null); } protected Control createContents(Composite parent) { Display currentDisplay = Display.getCurrent(); white = new Color(currentDisplay, 255, 255, 255); mainFont = new Font(currentDisplay, "Tahoma", 8, 0); headerFont = new Font(currentDisplay, "Tahoma", 16, 0); // Main layout Composites and overall FillLayout Composite container = new Composite(parent, SWT.NO_RADIO_GROUP); Composite header = new Composite(container, SWT.NO_RADIO_GROUP); Composite mainContents = new Composite(container, SWT.NO_RADIO_GROUP);; Composite footer = new Composite(container, SWT.NO_RADIO_GROUP);; FillLayout containerLayout = new FillLayout(SWT.VERTICAL); container.setLayout(containerLayout); // Header Label headerLabel = new Label(header, SWT.LEFT); headerLabel.setText("Header"); headerLabel.setFont(headerFont); // Main contents Label contentsLabel = new Label(mainContents, SWT.CENTER); contentsLabel.setText("Main Content Here"); contentsLabel.setFont(mainFont); // Footer Label footerLabel = new Label(footer, SWT.CENTER); footerLabel.setText("Footer Here"); footerLabel.setFont(mainFont); return container; } public void dispose() { cleanUp(); } @Override protected void finalize() throws Throwable { cleanUp(); super.finalize(); } private void cleanUp() { if (headerFont != null) { headerFont.dispose(); } if (mainFont != null) { mainFont.dispose(); } if (white != null) { white.dispose(); } } } And this results in an empty window when I run it like this: public static void main(String[] args) { MyWindow myWindow = new MyWindow(); myWindow.setBlockOnOpen(true); myWindow.open(); Display.getCurrent().dispose(); } What am I doing wrong that I don't see three labels the way I'm trying to display them? The createContents code is definitely being called, I can step through it in Eclipse in debug mode.
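
    A likely culprit (an assumption, since only this class is shown): an SWT Composite does not size or position its children unless it has a Layout set or the children are given explicit bounds, and header, mainContents and footer above never receive one, so the three labels stay at zero size and the window looks empty. Below is a sketch of how the container section of createContents could be laid out instead, using a GridLayout so the middle composite takes the leftover space while header and footer keep their natural height (GridLayout and GridData imports from org.eclipse.swt.layout assumed):

      Composite container = new Composite(parent, SWT.NO_RADIO_GROUP);
      container.setLayout(new GridLayout(1, false));           // one column, rows stacked vertically

      Composite header = new Composite(container, SWT.NO_RADIO_GROUP);
      header.setLayout(new FillLayout());                      // children now get laid out
      header.setLayoutData(new GridData(SWT.FILL, SWT.TOP, true, false));

      Composite mainContents = new Composite(container, SWT.NO_RADIO_GROUP);
      mainContents.setLayout(new FillLayout());
      mainContents.setLayoutData(new GridData(SWT.FILL, SWT.FILL, true, true)); // grabs remaining space

      Composite footer = new Composite(container, SWT.NO_RADIO_GROUP);
      footer.setLayout(new FillLayout());
      footer.setLayoutData(new GridData(SWT.FILL, SWT.BOTTOM, true, false));

    The labels are then created against header, mainContents and footer exactly as in the original code.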

  • Getting pixel data from an image using java.

    - by Matt
    I'm trying to get the pixel rgb values from a 64 x 48 bit image. I get some values but nowhere near the 3072 (= 64 x 48) values that I'm expecting. I also get: Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds! at sun.awt.image.ByteInterleavedRaster.getDataElements(ByteInterleavedRaster.java:301) at java.awt.image.BufferedImage.getRGB(BufferedImage.java:871) at imagetesting.Main.getPixelData(Main.java:45) at imagetesting.Main.main(Main.java:27) I can't find the out of bounds error... Here's the code: package imagetesting; import java.io.IOException; import javax.imageio.ImageIO; import java.io.File; import java.awt.image.BufferedImage; public class Main { public static final String IMG = "matty.jpg"; public static void main(String[] args) { BufferedImage img; try { img = ImageIO.read(new File(IMG)); int[][] pixelData = new int[img.getHeight() * img.getWidth()][3]; int[] rgb; int counter = 0; for(int i = 0; i < img.getHeight(); i++){ for(int j = 0; j < img.getWidth(); j++){ rgb = getPixelData(img, i, j); for(int k = 0; k < rgb.length; k++){ pixelData[counter][k] = rgb[k]; } counter++; } } } catch (IOException e) { e.printStackTrace(); } } private static int[] getPixelData(BufferedImage img, int x, int y) { int argb = img.getRGB(x, y); int rgb[] = new int[] { (argb >> 16) & 0xff, //red (argb >> 8) & 0xff, //green (argb ) & 0xff //blue }; System.out.println("rgb: " + rgb[0] + " " + rgb[1] + " " + rgb[2]); return rgb; } }
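
    The ArrayIndexOutOfBoundsException comes from the argument order: BufferedImage.getRGB(x, y) takes the column index first (0..width-1) and the row index second (0..height-1), while the loops above pass the height counter as x. For a 64 x 48 image that goes out of bounds as soon as the inner counter passes 47. A minimal self-contained sketch with the loops the other way around (same file name and output layout as above):

      import java.awt.image.BufferedImage;
      import java.io.File;
      import javax.imageio.ImageIO;

      public class PixelDump {
          public static void main(String[] args) throws Exception {
              BufferedImage img = ImageIO.read(new File("matty.jpg"));
              int[][] pixelData = new int[img.getHeight() * img.getWidth()][3];
              int counter = 0;
              // getRGB(x, y): x is the column (0..width-1), y is the row (0..height-1)
              for (int y = 0; y < img.getHeight(); y++) {
                  for (int x = 0; x < img.getWidth(); x++) {
                      int argb = img.getRGB(x, y);
                      pixelData[counter][0] = (argb >> 16) & 0xff; // red
                      pixelData[counter][1] = (argb >> 8) & 0xff;  // green
                      pixelData[counter][2] = argb & 0xff;         // blue
                      counter++;
                  }
              }
              System.out.println("pixels read: " + counter);       // 3072 for a 64 x 48 image
          }
      }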

  • How to Symbolicate iPhone App Crash Reports?

    - by bluej3
    Hello~ I retrieved the crash reports from iTunes Connect. I referenced this site. http://webcache.googleusercontent.com/search?q=cache:MmxwdXObZLMJ:www.anoshkin.net/blog/2008/09/09/iphone-crash-logs/+iphone+crash+debig&cd=2&hl=en&ct=clnk I tried.... $ symbolicatecrash report.crash MobileLines.app.dSYM report-with-symbols.crash Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/PrivateFrameworks/WebCore.framework/WebCore Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/Foundation.framework/Foundation Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/usr/lib/libSystem.B.dylib Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/PrivateFrameworks/GraphicsServices.framework/GraphicsServices Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/UIKit.framework/UIKit Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/OpenGLES.framework/MBXGLEngine.bundle/MBXGLEngine Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/AudioToolbox.framework/AudioToolbox Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/CoreFoundation.framework/CoreFoundation BUT... I didn't result. (find error message) - This directory is located "bulid/Distribution-iphones" - "MYGAME.app" file and "MYGAME.app.dSYM" file is located in same directory. How can i do solve this problem. ? Please help me :) * Crash log (carsh at thread 2 ) Incident Identifier: 95230C2E-CD83-46BF-8DAE-F38BCD46B910 Process: MYGAMELite [303] Path: /var/mobile/Applications/4FB79BEC-2BF0-438B-82A8-C302CD52A85C/MYGAMELite.app/MYGAMELite Identifier: MYGAMELite Version: ??? (???) 
Code Type: ARM (Native) Parent Process: launchd [1] Date/Time: 2010-06-03 11:43:52.875 +0800 OS Version: iPhone OS 3.1.2 (7D11) Report Version: 104 Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x03e3a002 Crashed Thread: 2 Thread 2 Crashed: 0 AudioToolbox 0x330d708c AU3DMixerEmbedded::SumInput16(unsigned long, AudioBufferList const&, AudioBufferList const&, unsigned long, float, unsigned long) 1 AudioToolbox 0x330d89a0 AU3DMixerEmbedded::Render(unsigned long&, AudioTimeStamp const&, unsigned long) 2 AudioToolbox 0x32fe6bb8 AUBase::DoRender(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long, AudioBufferList&) 3 AudioToolbox 0x32fe6504 Render 4 AudioToolbox 0x330160b8 AUInputElement::PullInput(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long) 5 AudioToolbox 0x33023fa8 AUInputFormatConverter2::InputProc(OpaqueAudioConverter*, unsigned long*, AudioBufferList*, AudioStreamPacketDescription*, void) 6 AudioToolbox 0x32fe4b60 AudioConverterChain::CallInputProc(unsigned long) 7 AudioToolbox 0x32fe4a5c AudioConverterChain::FillBufferFromInputProc(unsigned long*, CABufferList*) 8 AudioToolbox 0x32fe4790 BufferedAudioConverter::GetInputBytes(unsigned long, unsigned long&, CABufferList const*&) 9 AudioToolbox 0x33023e30 CBRConverter::RenderOutput(CABufferList*, unsigned long, unsigned long&, AudioStreamPacketDescription*) 10 AudioToolbox 0x32fe4284 BufferedAudioConverter::FillBuffer(unsigned long&, AudioBufferList&, AudioStreamPacketDescription*) 11 AudioToolbox 0x32fe44a4 AudioConverterChain::RenderOutput(CABufferList*, unsigned long, unsigned long&, AudioStreamPacketDescription*) 12 AudioToolbox 0x32fe4284 BufferedAudioConverter::FillBuffer(unsigned long&, AudioBufferList&, AudioStreamPacketDescription*) 13 AudioToolbox 0x32fe3f10 AudioConverterFillComplexBuffer 14 AudioToolbox 0x33023844 AUConverterBase::RenderBus(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long) 15 AudioToolbox 0x330ce928 AURemoteIO::RenderBus(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long) 16 AudioToolbox 0x32fe6bb8 AUBase::DoRender(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long, AudioBufferList&) 17 AudioToolbox 0x330cf308 AURemoteIO::PerformIO(int, unsigned int, unsigned int, AQTimeStamp const&, AQTimeStamp const&) 18 AudioToolbox 0x330cf4cc AURIOCallbackReceiver_PerformIOSync 19 AudioToolbox 0x330c76fc _XPerformIOSync 20 AudioToolbox 0x330181d8 mshMIGPerform 21 AudioToolbox 0x3309cec8 MSHMIGDispatchMessage 22 AudioToolbox 0x330d48d4 AURemoteIO::IOThread::Entry(void*) 23 AudioToolbox 0x32fc9f20 CAPThread::Entry(CAPThread*) 24 libSystem.B.dylib 0x30b5b7b0 _pthread_body

  • Problem running the Boost example blocking_udp_echo_client on Mac OS X

    - by n179911
    I am trying to run blocking_udp_echo_client on MacOS X http://www.boost.org/doc/libs/1_35_0/doc/html/boost_asio/example/echo/blocking_udp_echo_client.cpp I run it with argument 'localhost 9000' But the program crashes and this is the line in the source which crashes: `udp::socket s(io_service, udp::endpoint(udp::v4(), 0));' this is the stack trace: #0 0x918c3e42 in __kill #1 0x918c3e34 in kill$UNIX2003 #2 0x9193623a in raise #3 0x91942679 in abort #4 0x940d96f9 in __gnu_debug::_Error_formatter::_M_error #5 0x0000e76e in __gnu_debug::_Safe_iterator::op_base* , __gnu_debug_def::list::op_base*, std::allocator::op_base* ::_Safe_iterator at safe_iterator.h:124 #6 0x00014729 in boost::asio::detail::hash_map::op_base*::bucket_type::bucket_type at hash_map.hpp:277 #7 0x00019e97 in std::_Construct::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_construct.h:81 #8 0x0001a457 in std::__uninitialized_fill_n_aux::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:194 #9 0x0001a4e1 in std::uninitialized_fill_n::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:218 #10 0x0001a509 in std::__uninitialized_fill_n_a::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:310 #11 0x0001aa34 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::_M_fill_insert at vector.tcc:365 #12 0x0001acda in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::insert at stl_vector.h:658 #13 0x0001ad81 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::resize at stl_vector.h:427 #14 0x0001ae3a in __gnu_debug_def::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::resize at vector:169 #15 0x0001b7be in boost::asio::detail::hash_map::op_base*::rehash at hash_map.hpp:221 #16 0x0001bbeb in boost::asio::detail::hash_map::op_base*::hash_map at hash_map.hpp:67 #17 0x0001bc74 in boost::asio::detail::reactor_op_queue::reactor_op_queue at reactor_op_queue.hpp:42 #18 0x0001bd24 in boost::asio::detail::kqueue_reactor::kqueue_reactor at kqueue_reactor.hpp:86 #19 0x0001c000 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109 #20 0x0001c14d in boost::asio::use_service at io_service.ipp:195 #21 0x0001c26d in boost::asio::detail::reactive_socket_service ::reactive_socket_service at reactive_socket_service.hpp:111 #22 0x0001c344 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109 #23 0x0001c491 in boost::asio::use_service at io_service.ipp:195 #24 0x0001c4d5 in boost::asio::datagram_socket_service::datagram_socket_service at datagram_socket_service.hpp:95 #25 0x0001c59e in boost::asio::detail::service_registry::use_service at service_registry.hpp:109 #26 0x0001c6eb in boost::asio::use_service at io_service.ipp:195 #27 0x0001c711 in boost::asio::basic_io_object ::basic_io_object at basic_io_object.hpp:72 #28 0x0001c783 in boost::asio::basic_socket ::basic_socket at basic_socket.hpp:108 #29 0x0001c865 in boost::asio::basic_datagram_socket 
::basic_datagram_socket at basic_datagram_socket.hpp:107 #30 0x000027bc in main at main.cpp:32 This is the gdb output: (gdb) continue /Developer/SDKs/MacOSX10.5.sdk/usr/include/c++/4.0.0/debug/safe_iterator.h:127: error: attempt to copy-construct an iterator from a singular iterator. Objects involved in the operation: iterator "this" @ 0x0x100420 { type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator); state = singular; } iterator "other" @ 0x0xbfffe8a4 { type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator); state = singular; } Program received signal: “SIGABRT”. (gdb) continue Program received signal: “?”. Does someone has any idea why this example does not work on mac osx? Thank you.

  • Default custom ControlTemplate is not applied when using Style

    - by gehho
    Hi all, I have created a default style for a Button including a custom ControlTemplate like so: <Style TargetType="{x:Type Button}"> <Setter Property="OverridesDefaultStyle" Value="True"/> <Setter Property="Background" Value="White"/> <Setter Property="BorderBrush" Value="Black"/> <!-- ...other property setters... --> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type Button}"> <Grid x:Name="gridMain"> <!-- some content here --> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> This style is added to my shared ResourceDictionary which is loaded by every control. Now, this style/template is applied to all my buttons, as expected, but it is NOT applied to those buttons which locally use a different style. For example, I want to have a certain margin for my "OK", "Apply" and "Cancel" buttons. Therefore, I defined the following style: <Style x:Key="OKApplyCancelStyle" TargetType="{x:Type Button}"> <Setter Property="Margin" Value="4,8"/> <Setter Property="Padding" Value="8,6"/> <Setter Property="MinWidth" Value="100"/> <Setter Property="FontSize" Value="16"/> </Style> ...and applied that style to my buttons using a StaticResource: <Button Content="OK" Style="{StaticResource OKApplyCancelStyle}"/> For me, the expected result would be that the ControlTemplate above would still be applied, using the values for Margin, Padding, MinWidth and FontSize from the "OKApplyCancelStyle". But this is not the case. The default Windows ControlTemplate is used instead, using the values from the style. Is this the typical behavior? Does a local style really override a custom ControlTemplate? If so, how can I achieve my desired behavior? I.e. still use my custom ControlTemplate even when styles are defined locally? Many thanks in advance, gehho.

  • Mouse wheel not scrolling in JDialog

    - by Iulian Serbanoiu
    Hello, I'm facing a frustrating issue. I have an application where the scroll wheel doesn't work in a JDialog class. Here's the code: import javax.swing.*; import java.awt.event.*; public class Failtest extends JFrame { public static void main(String[] args) { new Failtest(); } public Failtest() { super(); setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); setTitle("FRAME"); JScrollPane sp1 = new JScrollPane(getNewList()); add(sp1); setSize(150, 150); setVisible(true); JDialog d = new JDialog(this, false);// NOT WORKING //JDialog d = new JDialog((JFrame)null, false); // NOT WORKING //JDialog d = new JDialog((JDialog)null, false);// WORKING - WHY? d.setTitle("DIALOG"); d.setDefaultCloseOperation(JDialog.DISPOSE_ON_CLOSE); JScrollPane sp = new JScrollPane(getNewList()); d.add(sp); d.setSize(150, 150); d.setVisible(true); } public JList getNewList() { String objs[] = new String[30]; for(int i=0; i<objs.length; i++) { objs[i] = "Item "+i; } JList l = new JList(objs); return l; } } I found a solution which is present as a comment in the java code - the constructor receiving a (JDialog)null parameter. Can someone enlighten me? My opinion is that this is a java bug. Tested on Windows XP-SP3 with 1 JDK and 2 JREs: D:\Program Files\Java\jdk1.6.0_17\bin>javac -version javac 1.6.0_17 D:\Program Files\Java\jdk1.6.0_17\bin>java -version java version "1.6.0_17" Java(TM) SE Runtime Environment (build 1.6.0_17-b04) Java HotSpot(TM) Client VM (build 14.3-b01, mixed mode, sharing) D:\Program Files\Java\jdk1.6.0_17\bin>cd .. D:\Program Files\Java\jdk1.6.0_17>java -version java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode, sharing) Thank you in advance, Iulian Serbanoiu PS: The problem is not new - the code is taken from a forum (here) where this problem was also mentioned - but no solutions to it (yet)

  • Printing foreign text with PHP on Ubuntu and CentOS

    - by hao
    Hey guys, I am using DOMDocument and things like $div->nodeValue to obtain certain info from a web page. On my Ubuntu machine, when I run php crawl.php everything is displayed properly in Chinese (the page is in UTF-8). However, on my CentOS machine the same code prints æ´å¤åå¸ in the terminal, and when I save it to the database the characters are also messed up. One thing I noticed is that when I print $content, both systems display it properly.

  • Get aspect ratio of a monitor

    - by Alexander Stalt
    I want to get aspect ratio of a monitor as two digits : width and height. For example 4 and 3, 5 and 4, 16 and 9. I wrote some code for that task. Maybe it is any easier way to do that ? For example, some library function =\ /// <summary> /// Aspect ratio. /// </summary> public struct AspectRatio { int _height; /// <summary> /// Height. /// </summary> public int Height { get { return _height; } } int _width; /// <summary> /// Width. /// </summary> public int Width { get { return _width; } } /// <summary> /// Ctor. /// </summary> /// <param name="height">Height of aspect ratio.</param> /// <param name="width">Width of aspect ratio.</param> public AspectRatio(int height, int width) { _height = height; _width = width; } } public sealed class Aux { /// <summary> /// Get aspect ratio. /// </summary> /// <returns>Aspect ratio.</returns> public static AspectRatio GetAspectRatio() { int deskHeight = Screen.PrimaryScreen.Bounds.Height; int deskWidth = Screen.PrimaryScreen.Bounds.Width; int gcd = GCD(deskWidth, deskHeight); return new AspectRatio(deskHeight / gcd, deskWidth / gcd); } /// <summary> /// Greatest Common Denominator (GCD). Euclidean algorithm. /// </summary> /// <param name="a">Width.</param> /// <param name="b">Height.</param> /// <returns>GCD.</returns> static int GCD(int a, int b) { return b == 0 ? a : GCD(b, a % b); } }

  • Is it possible to make mt.exe embed manifest files correctly in Visual Studio 2008?

    - by Sorin Sbarnea
    I found that mt.exe fails to correctly create and embed manifest files into executables when run inside a VCPROJ. For example the same executable load well on Windows 7 but failed to load on Windows XP. The manifest was embedded and correct. After I spend lots of hours searching for possible reasons and solution I modified the project settings to generate the manifest outside the exe file. Now it works on both systems. Here are the examples for debug builds. With embed disabled: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugMFC" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> </assembly> This is with embed enabled: <?xml version="1.0" encoding="UTF-8" standalone="yes" ?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false" /> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" /> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugMFC" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" /> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" processorArchitecture="x86" publicKeyToken="6595b64144ccf1df" language="*" /> </dependentAssembly> </dependency> </assembly> If you compare them the second one adds common controls (I don't know from where) and also it is a small difference with the syntax of requestedExecutionLevel tag.

  • ListView with button and check mark?

    - by jgelderloos
    So I have looked through a lot of other answers but have not been able to get my app to work how I want it. I basically want the list view that has the text and check mark to the right, but then an addition button to the left. Right now my list view shows up but the check image is never changed. Selector: <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_selected="true" android:drawable="@drawable/accept_on" /> <item android:drawable="@drawable/accept" /> </selector> Row xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/layout" android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="10dp" android:background="#EEE"> <ImageButton android:id="@+id/goToMapButton" android:src="@drawable/go_to_map" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="left" /> <TextView android:id="@+id/itemName" android:layout_width="fill_parent" android:layout_height="fill_parent" android:gravity="center_vertical" android:textColor="#000000" android:layout_marginTop="5dp" android:layout_marginBottom="5dp" android:layout_weight="1" /> <Button android:id="@+id/checkButton" android:background="@drawable/item_selector" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="right" /> </LinearLayout> MapAdapter: import android.content.Context; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.ImageButton; import android.widget.LinearLayout; import android.widget.TextView; public class MapAdapter extends ArrayAdapter<String>{ Context context; int layoutResourceId; String data[] = null; LayoutInflater inflater; LinearLayout layout; public MapAdapter(Context context, int layoutResourceId, String[] data) { super(context, layoutResourceId, data); this.layoutResourceId = layoutResourceId; this.context = context; this.data = data; inflater = LayoutInflater.from(context); } @Override public String getItem(int position) { return data[position]; } @Override public View getView(int position, View convertView, ViewGroup parent) { ViewHolder holder = new ViewHolder(); if(convertView == null) { convertView = inflater.inflate(R.layout.map_item_row, null); layout = (LinearLayout)convertView.findViewById(R.id.layout); holder.map = (ImageButton)convertView.findViewById(R.id.goToMapButton); holder.name = (TextView)convertView.findViewById(R.id.itemName); //holder.check = (Button)convertView.findViewById(R.id.checkButton); convertView.setTag(holder); } else { holder = (ViewHolder) convertView.getTag(); } layout.setBackgroundColor(0x00000004); holder.name.setText(getItem(position)); return convertView; } static class ViewHolder { ImageButton map; TextView name; Button check; } }
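
    One plausible reason the check image never changes (an assumption, since the click handling isn't shown): the selector keys off android:state_selected, but nothing ever puts the button into the selected state, and holder.check is never bound in getView. A rough sketch of wiring it up, with a boolean[] checked field added to the adapter (not in the original) so recycled rows keep their state:

      // added field, initialised in the constructor: checked = new boolean[data.length];
      private boolean[] checked;

      // in getView(), after the ViewHolder is resolved:
      holder.check = (Button) convertView.findViewById(R.id.checkButton);
      holder.check.setSelected(checked[position]);      // restore state for recycled views
      final int row = position;
      holder.check.setOnClickListener(new View.OnClickListener() {
          @Override
          public void onClick(View v) {
              checked[row] = !checked[row];
              v.setSelected(checked[row]);              // flips the selector to accept_on
          }
      });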

  • Install Trac on 64bits Windows 7

    - by Tufo
    I'm configuring a new Developing Server that came with Windows 7 64bits. It must have installed Trac with Subversion integration. I install Subversion with VisualSVN 2.1.1, clients with TortoiseSVN 1.6.7 and AnkhSVN 2.1.7 for Visual Studio 2008 SP1 integration. All works fine! my problem begun when going to Trac installation. I install python 2.6 all fine. Trac hasn't a x64 windows installer, so I installed it manually by compiling it with python console (C:\Python26\python.exe C:/TRAC/setup.py install). After that, I can create TRAC projects normally, the Trac core is working fine. And so the problem begins, lets take a look at the Trac INSTALL file: Requirements To install Trac, the following software packages must be installed: Python, version = 2.3. Subversion, version = 1.0. (= 1.1.xrecommended) Subversion SWIG Python bindings (not PySVN). PySQLite,version 1.x (for SQLite 2.x) or version 2.x (for SQLite 3.x) Clearsilver, version = 0.9.3 (0.9.14 recommended) Python: OK Subverion: OK Subversion SWIG Python bindings (not PySVN): Here I face the first issue, he asks me for 'cd' to the swig directory and run the 'configure' file, and the result is: C:\swigwin-1.3.40> c:\python26\python.exe configure File "configure", line 16 DUALCASE=1; export DUALCASE # for MKS sh ^ SyntaxError: invalid syntax PySQLite, version 1.x (for SQLite 2.x) or version 2.x (for SQLite 3.x): Don't need, as Python 2.6 comes with SQLLite Clearsilver, version = 0.9.3 (0.9.14 recommended): Second issue, Clearsilver only has 32bit installer wich does not recognize python installation (as registry keys are in different places from 32 to 64 bits). So I try to manually install it with python console. It returns me a error of the same kind as SWIG: C:\clearsilver-0.10.5>C:\python26\python.exe ./configure File "./configure", line 13 if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then ^ SyntaxError: invalid syntax When I simulate a web server using the "TRACD" command, it runs fine when I disable svn support but when I try to open the web page it shows me a error regarding ClearSilver is not installed for generating the html content. AND (for making me more happy) This TRAC will run over IIS7, I mustn't install Apache... I'm nearly crazy with this issue... HELP!!!

  • How to use AES with 256-bit keys with the built-in Java 1.4 API

    - by sahil garg
    I am able to encrypt with AES 128 but with more key length it fails. code using AES 128 is as below. import java.security.*; import javax.crypto.*; import javax.crypto.spec.*; import java.io.*; /** * This program generates a AES key, retrieves its raw bytes, and * then reinstantiates a AES key from the key bytes. * The reinstantiated key is used to initialize a AES cipher for * encryption and decryption. */ public class AES { /** * Turns array of bytes into string * * @param buf Array of bytes to convert to hex string * @return Generated hex string */ public static String asHex (byte buf[]) { StringBuffer strbuf = new StringBuffer(buf.length * 2); int i; for (i = 0; i < buf.length; i++) { if (((int) buf[i] & 0xff) < 0x10) strbuf.append("0"); strbuf.append(Long.toString((int) buf[i] & 0xff, 16)); } return strbuf.toString(); } public static void main(String[] args) throws Exception { String message="This is just an example"; // Get the KeyGenerator KeyGenerator kgen = KeyGenerator.getInstance("AES"); kgen.init(128); // 192 and 256 bits may not be available // Generate the secret key specs. SecretKey skey = kgen.generateKey(); byte[] raw = skey.getEncoded(); SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES"); // Instantiate the cipher Cipher cipher = Cipher.getInstance("AES"); cipher.init(Cipher.ENCRYPT_MODE, skeySpec); byte[] encrypted =cipher.doFinal("welcome".getBytes()); System.out.println("encrypted string: " + asHex(encrypted)); cipher.init(Cipher.DECRYPT_MODE, skeySpec); byte[] original = cipher.doFinal(encrypted); String originalString = new String(original); System.out.println("Original string: " + originalString + " " + asHex(original)); } }
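
    Assuming the failure is the usual InvalidKeyException ("Illegal key size"), the stock JRE ships with restricted jurisdiction policy files that cap AES at 128 bits; 192- and 256-bit keys only work after installing the JCE Unlimited Strength Jurisdiction Policy files matching the JRE version (recent JREs have lifted the restriction). Below is a small sketch that reports the limit before asking for a 256-bit key; note that Cipher.getMaxAllowedKeyLength needs Java 5 or later, so on an actual 1.4 runtime the only option is installing the policy files:

      import javax.crypto.Cipher;
      import javax.crypto.KeyGenerator;
      import javax.crypto.SecretKey;
      import javax.crypto.spec.SecretKeySpec;

      public class Aes256Check {
          public static void main(String[] args) throws Exception {
              // 128 with the default restricted policy, 2147483647 once the unlimited policy is installed
              int max = Cipher.getMaxAllowedKeyLength("AES");
              System.out.println("Max allowed AES key length: " + max);

              int keyBits = (max >= 256) ? 256 : 128;
              KeyGenerator kgen = KeyGenerator.getInstance("AES");
              kgen.init(keyBits);
              SecretKey skey = kgen.generateKey();

              Cipher cipher = Cipher.getInstance("AES");
              cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(skey.getEncoded(), "AES"));
              byte[] encrypted = cipher.doFinal("welcome".getBytes());
              System.out.println("Encrypted " + encrypted.length + " bytes with a " + keyBits + "-bit key");
          }
      }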

  • More advanced usage of interfaces

    - by owca
    To be honest I'm not quite sure if I understand the task myself :) I was told to create class MySimpleIt, that implements Iterator and Iterable and will allow to run the provided test code. Arguments and variables of objects cannot be either Collections or arrays. The code : MySimpleIt msi=new MySimple(10,100, MySimpleIt.PRIME_NUMBERS); for(int el: msi) System.out.print(el+" "); System.out.println(); msi.setType(MySimpleIterator.ODD_NUMBERS); msi.setLimits(15,30); for(int el: msi) System.out.print(el+" "); System.out.println(); msi.setType(MySimpleIterator.EVEN_NUMBERS); for(int el: msi) System.out.print(el+" "); System.out.println(); The result I should obtain : 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 15 17 19 21 23 25 27 29 16 18 20 22 24 26 28 30 And here's my code : import java.util.Iterator; interface MySimpleIterator{ static int ODD_NUMBERS=0; static int EVEN_NUMBERS = 1; static int PRIME_NUMBERS = 2; int setType(int i); } public class MySimpleIt implements Iterable, Iterator, MySimpleIterator { public MySimple my; public MySimpleIt(MySimple m){ my = m; } public int setType(int i){ my.numbers = i; return my.numbers; } public void setLimits(int d, int u){ my.down = d; my.up = u; } public Iterator iterator(){ Iterator it = this.iterator(); return it; } public void remove(){ } public Object next(){ Object o = new Object(); return o; } public boolean hasNext(){ return true; } } class MySimple { public int down; public int up; public int numbers; public MySimple(int d, int u, int n){ down = d; up = u; numbers = n; } } In the test code I have error in line when creating MySimpleIt msi object, as it finds MySimple instead of MySimpleIt. Also I have errors in for-each loops, because compiler wants 'ints' there instead of Object. Anyone has any idea on how to solve it ?
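
    One possible reading of the assignment (a sketch, not the only solution): make MySimple extend MySimpleIt so the test line MySimpleIt msi = new MySimple(10, 100, ...) compiles, implement Iterable<Integer> and Iterator<Integer> so for (int el : msi) unboxes cleanly, and have iterator() reset an internal cursor so the same object can be iterated repeatedly. Assuming the MySimpleIterator interface exactly as declared above:

      import java.util.Iterator;

      class MySimpleIt implements Iterable<Integer>, Iterator<Integer>, MySimpleIterator {
          protected int down, up, numbers;
          private int cursor;

          public int setType(int i) { numbers = i; return numbers; }
          public void setLimits(int d, int u) { down = d; up = u; }

          public Iterator<Integer> iterator() { cursor = down; return this; } // fresh pass per for-each

          public boolean hasNext() { return nextMatch(cursor) <= up; }

          public Integer next() { int v = nextMatch(cursor); cursor = v + 1; return v; }

          public void remove() { throw new UnsupportedOperationException(); }

          private int nextMatch(int from) {              // smallest value >= from matching the current type
              int v = from;
              while (v <= up && !matches(v)) v++;
              return v;
          }

          private boolean matches(int v) {
              switch (numbers) {
                  case ODD_NUMBERS:  return v % 2 != 0;
                  case EVEN_NUMBERS: return v % 2 == 0;
                  default:           return isPrime(v);  // PRIME_NUMBERS
              }
          }

          private boolean isPrime(int v) {
              if (v < 2) return false;
              for (int d = 2; d * d <= v; d++) if (v % d == 0) return false;
              return true;
          }
      }

      class MySimple extends MySimpleIt {
          public MySimple(int d, int u, int n) { down = d; up = u; numbers = n; }
      }

    With that in place the test code prints the three expected lines (primes 11..97, odd 15..29, even 16..30) without using any collections or arrays.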

  • H.264 over RTP - Identify SPS and PPS Frames

    - by Toby
    I have a raw H.264 Stream from an IP Camera packed in RTP frames. I want to get raw H.264 data into a file so I can convert it with ffmpeg. So when I want to write the data into my raw H.264 file I found out it has to look like this: 00 00 01 [SPS] 00 00 01 [PPS] 00 00 01 [NALByte] [PAYLOAD RTP Frame 1] // Payload always without the first 2 Bytes -> NAL [PAYLOAD RTP Frame 2] [... until PAYLOAD Frame with Mark Bit received] // From here its a new Video Frame 00 00 01 [NAL BYTE] [PAYLOAD RTP Frame 1] .... So I get the SPS and the PPS from the Session Description Protocol out of my preceding RTSP communication. Additionally the camera sends the SPS and the PPSin two single messages before starting with the video stream itself. So I capture the messages in this order: 1. Preceding RTSP Communication here ( including SDP with SPS and PPS ) 2. RTP Frame with Payload: 67 42 80 28 DA 01 40 16 C4 // This is the SPS 3. RTP Frame with Payload: 68 CE 3C 80 // This is the PPS 4. RTP Frame with Payload: ... // Video Data Then there come some Frames with Payload and at some point a RTP Frame with the Marker Bit = 1. This means ( if I got it right) that I have a complete video frame. Afer this I write the Prefix Sequence ( 00 00 01 ) and the NALfrom the payload again and go on with the same procedure. Now my camera sends me after every 8 complete Video Frames the SPS and the PPS again. ( Again in two RTP Frames, as seen in the example above ). I know that especially the PPS can change in between streaming but that's not the problem. My questions are now: 1. Do I need to write the SPS/PPS every 8th Video Frame? If my SPS and my PPS don't change it should be enough to have them written at the very beginning of my file and nothing more? 2. How to distinguish between SPS/PPS and normal RTP Frames? In my C++ Code which parses the transmitted data I need make a difference between the RTP Frames with normal Payload an the ones carrying the SPS/PPS. How can I distinguish them? Okay the SPS/PPS frames are usually way smaller, but that's not a save call to rely on. Because if I ignore them I need to know which data I can throw away, or if I need to write them I need to put the 00 00 01 Prefix in front of them. ? Or is it a fixed rule that they occur every 8th Video Frame?
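
    On question 2, there is no need to rely on payload size: the first byte of every NAL unit carries nal_unit_type in its low five bits (7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice), which is exactly why the frames above start with 67 (0x67 & 0x1F = 7) and 68 (0x68 & 0x1F = 8). A tiny sketch of that check (Java here just for illustration; the same masking works in the C++ parser):

      public class NalInspector {
          // nal_unit_type lives in the low 5 bits of the first NAL byte
          static int nalUnitType(byte firstByte) { return firstByte & 0x1F; }

          static boolean isSps(byte b) { return nalUnitType(b) == 7; }
          static boolean isPps(byte b) { return nalUnitType(b) == 8; }

          public static void main(String[] args) {
              System.out.println(nalUnitType((byte) 0x67)); // 7 -> SPS, as in the frame above
              System.out.println(nalUnitType((byte) 0x68)); // 8 -> PPS
              System.out.println(nalUnitType((byte) 0x65)); // 5 -> IDR slice (hypothetical value)
          }
      }

    On question 1, writing the SPS/PPS once at the start of the file is normally enough as long as they never change; the camera repeats them mainly so a decoder can join mid-stream, so repeating them in the file is harmless but not required.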
