Search Results

Search found 45581 results on 1824 pages for 'value objects'.

  • WPF ListBox and ToggleButton

    - by Tan
    Hi iam using a listbox to show a list of items. in the listbox i ahve an togglebutton on every item. When i click on the toggle button the state of the togglebutton is pressed. But when i am scrolling down in the listbox and scolls up again. The togglebutton state is not pressed. How can i prevent this please help. Heres my itemtemplate <ListBox.ItemTemplate> <DataTemplate> <StackPanel Margin="0,3,0,0"> <Border BorderBrush="Black" BorderThickness="1,1,1,1"> <Border.Background> <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0" MappingMode="RelativeToBoundingBox"> <GradientStop Color="#FFECECEC" Offset="1"/> <GradientStop Color="#FFE8E8E8"/> <GradientStop Color="#FFBDBDBD" Offset="0.153"/> <GradientStop Color="#FFE8E8E8" Offset="0.904"/> </LinearGradientBrush> </Border.Background> <Border.Style> <Style> <Style.Triggers> <DataTrigger Binding="{Binding Path=IsSelected, RelativeSource={RelativeSource Mode=FindAncestor,AncestorType={x:Type ListBoxItem}}}" Value="True"> <Setter Property="Border.Height" Value="100"/> <Setter Property="Border.Background"> <Setter.Value> <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0" MappingMode="RelativeToBoundingBox"> <GradientStop Color="DarkGray" Offset="1"/> <GradientStop Color="#FFE8E8E8"/> <GradientStop Color="#FFBDBDBD" Offset="0.153"/> <GradientStop Color="DarkGray" Offset="0.904"/> </LinearGradientBrush> </Setter.Value> </Setter> </DataTrigger> </Style.Triggers> </Style> </Border.Style> <StackPanel Orientation="Horizontal" VerticalAlignment="Center"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="500"/> <ColumnDefinition Width="100"/> <ColumnDefinition Width="55"/> </Grid.ColumnDefinitions> <!--Pick number--> <StackPanel Grid.Column="0" VerticalAlignment="Center" Orientation="Vertical"> <TextBlock Text="{Binding Path=FtgNamn}" FontWeight="Bold" FontSize="22pt" FontFamily="Calibri"/> <TextBlock Text="{Binding Path=LevsAttBeskr}" FontSize="18pt" FontFamily="Calibri"/> </StackPanel> <!--Pick Quantity--> <StackPanel Grid.Column="1" VerticalAlignment="Center"> <TextBlock Text="{Binding Path=Antal}" FontSize="44pt" FontFamily="Calibri"/> </StackPanel> <!-- Checkbox--> <StackPanel Grid.Column="2" VerticalAlignment="Center" HorizontalAlignment="Center"> <ToggleButton Name="Check" Width="40" Height="40" Click="Check_Click" Tag="{Binding Path=Plocklista}"> <ToggleButton.Style> <Style TargetType="ToggleButton"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type ToggleButton}"> <Border x:Name="InnerBorder" Background="White" BorderBrush="Black" BorderThickness="1"/> <ControlTemplate.Triggers> <Trigger Property="IsChecked" Value="True"> <Setter TargetName="InnerBorder" Property="Background"> <Setter.Value> <ImageBrush ImageSource="/Images/button_ok.png"/> </Setter.Value> </Setter> <Setter TargetName="InnerBorder" Property="BorderThickness" Value="0"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> </ToggleButton.Style> </ToggleButton> </StackPanel> </Grid> <Border BorderBrush="Darkgray" BorderThickness="0,0,1,0"> </Border> <TextBlock Width="100" Text="{Binding Path=Quantity}" FontSize="44pt" FontFamily="Calibri"/> <CheckBox Width="78"/> </StackPanel> </Border> </StackPanel> </DataTemplate>

    Read the article

  • Custom HTML attributes on SelectListItems in MVC2?

    - by blesh
    I have a need to add custom HTML attributes, specifically classes or styles to option tags in the selects generated by Html.DropDownFor(). I've been playing with it, and for the life of me I can't figure out what I need to do to get what I need working. Assuming I have a list of colors that I'm generating the dropdown for, where the option value is the color's identifier, and the text is the name... here's what I'd like to be able to see as output: <select name="Color"> <option value="1" style="background:#ff0000">Red</option> <option value="2" style="background:#00ff00">Green</option> <option value="3" style="background:#0000ff">Blue</option> <!-- more here --> <option value="25" style="background:#f00f00">Foo Foo</option> </select

    Read the article

  • How to ignore timezone of DateTime in .NET WCF client?

    - by Net_Dev
    WCF client is receiving a Date value from a Java web service where the date sent to the client in XML is : <sampleDate>2010-05-10+14:00</sampleDate> Now the WCF client receiving this date is in timezone (+08:00) and when the client deserialises the Date value it is converted into the following DateTime value : 2010-05-09 18:00 +08:00 However we would like to ignore the +14:00 being sent from the server so that the serialised Date value in the client is : 2010-05-10 Note that the +14:00 is not consistent and may be +10:00, +11:00 etc so it is not possible to use DateTime conversions on the client side to get the desired date value. How can this be easily achieved in WCF? Thanks in advance.

    Read the article

  • Problem with Matrix sub-total in an RDLC report in VB.NET

    - by Keven
    Hi everyone, I have a matrix and I need to add the money earned this year and past years. However, I must remove the money spent in past years. I must have the separate amount per year and the total of these amounts. This is what gives my matrix: Year = Fields!Year.value =formatnumber((sum(Fields!Results.Value))-(sum(iif( Fields!Year.value & Parameters!choosedYear.Value, Fields!Moneyspent.value,0))), 2) & "$" However, the subtotal gives me an error. What should I do? P.S.: I already found that the subtotal gives me an error because it's not in the scope of the rowgroup1, but is there a way to get the scope in the subtotal? or can anybody find another way to do it?

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API, the calls across these APIs generally mimic the Berkeley DB C-API which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node, even the most sophisticated databases only had simple fail-over two node solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or to write your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM" mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheep, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law, processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes cause a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage, new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. There were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple, key/value look-ups or simple range queries with only a few queries that required more complex joins. 
They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak, and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL, or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage, a similar type of storage to Berkeley DB's except that it is based on a hash table which must reside entirely in memory rather than a btree which can live in memory or on disk. Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first (a master), and then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become eventually consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group and some allow writes as well as reads at any node; Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely.
Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could extend our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

    Read the article

  • Tricks and Optimizations for your Sitecore website

    - by amaniar
    When working with Sitecore there are some optimizations/configurations I usually repeat in order to make my app production ready. Following is a small list I have compiled from experience, Sitecore documentation, communicating with Sitecore Engineers etc. This is not supposed to be technically complete and might not be fit for all environments.   Simple configurations that can make a difference: 1) Configure Sitecore Caches. This is the most straight forward and sure way of increasing the performance of your website. Data and item cache sizes (/databases/database/ [id=web] ) should be configured as needed. You may start with a smaller number and tune them as needed. <cacheSizes hint="setting"> <data>300MB</data> <items>300MB</items> <paths>5MB</paths> <standardValues>5MB</standardValues> </cacheSizes> Tune the html, registry etc cache sizes for your website.   <cacheSizes> <sites> <website> <html>300MB</html> <registry>1MB</registry> <viewState>10MB</viewState> <xsl>5MB</xsl> </website> </sites> </cacheSizes> Tune the prefetch cache settings under the App_Config/Prefetch/ folder. Sample /App_Config/Prefetch/Web.Config: <configuration> <cacheSize>300MB</cacheSize> <!--preload items that use this template--> <template desc="mytemplate">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</template> <!--preload this item--> <item desc="myitem">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX }</item> <!--preload children of this item--> <children desc="childitems">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</children> </configuration> Break your page into sublayouts so you may cache most of them. Read the caching configuration reference: http://sdn.sitecore.net/upload/sitecore6/sc62keywords/cache_configuration_reference_a4.pdf   2) Disable Analytics for the Shell Site <site name="shell" virtualFolder="/sitecore/shell" physicalFolder="/sitecore/shell" rootPath="/sitecore/content" startItem="/home" language="en" database="core" domain="sitecore" loginPage="/sitecore/login" content="master" contentStartItem="/Home" enableWorkflow="true" enableAnalytics="false" xmlControlPage="/sitecore/shell/default.aspx" browserTitle="Sitecore" htmlCacheSize="2MB" registryCacheSize="3MB" viewStateCacheSize="200KB" xslCacheSize="5MB" />   3) Increase the Check Interval for the MemoryMonitorHook so it doesn’t run every 5 seconds (default). <hook type="Sitecore.Diagnostics.MemoryMonitorHook, Sitecore.Kernel"> <param desc="Threshold">800MB</param> <param desc="Check interval">00:05:00</param> <param desc="Minimum time between log entries">00:01:00</param> <ClearCaches>false</ClearCaches> <GarbageCollect>false</GarbageCollect> <AdjustLoadFactor>false</AdjustLoadFactor> </hook>   4) Set Analytics.PeformLookup (Sitecore.Analytics.config) to false if your environment doesn’t have access to the internet or you don’t intend to use reverse DNS lookup. 
<setting name="Analytics.PerformLookup" value="false" />   5) Set the value of the “Media.MediaLinkPrefix” setting to “-/media”: <setting name="Media.MediaLinkPrefix" value="-/media" /> Add the following line to the customHandlers section: <customHandlers> <handler trigger="-/media/" handler="sitecore_media.ashx" /> <handler trigger="~/media/" handler="sitecore_media.ashx" /> <handler trigger="~/api/" handler="sitecore_api.ashx" /> <handler trigger="~/xaml/" handler="sitecore_xaml.ashx" /> <handler trigger="~/icon/" handler="sitecore_icon.ashx" /> <handler trigger="~/feed/" handler="sitecore_feed.ashx" /> </customHandlers> Link: http://squad.jpkeisala.com/2011/10/sitecore-media-library-performance-optimization-checklist/   6) Performance counters should be disabled in production if not being monitored <setting name="Counters.Enabled" value="false" />   7) Disable Item/Memory/Timing threshold warnings. Due to the nature of this component, it brings no value in production. <!--<processor type="Sitecore.Pipelines.HttpRequest.StartMeasurements, Sitecore.Kernel" />--> <!--<processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel"> <TimingThreshold desc="Milliseconds">1000</TimingThreshold> <ItemThreshold desc="Item count">1000</ItemThreshold> <MemoryThreshold desc="KB">10000</MemoryThreshold> </processor>—>   8) The ContentEditor.RenderCollapsedSections setting is a hidden setting in the web.config file, which by default is true. Setting it to false will improve client performance for authoring environments. <setting name="ContentEditor.RenderCollapsedSections" value="false" />   9) Add a machineKey section to your Web.Config file when using a web farm. Link: http://msdn.microsoft.com/en-us/library/ff649308.aspx   10) If you get errors in the log files similar to: WARN Could not create an instance of the counter 'XXX.XXX' (category: 'Sitecore.System') Exception: System.UnauthorizedAccessException Message: Access to the registry key 'Global' is denied. Make sure the ApplicationPool user is a member of the system “Performance Monitor Users” group on the server.   11) Disable WebDAV configurations on the CD Server if not being used. More: http://sitecoreblog.alexshyba.com/2011/04/disable-webdav-in-sitecore.html   12) Change Log4Net settings to only log Errors on content delivery environments to avoid unnecessary logging. <root> <priority value="ERROR" /> <appender-ref ref="LogFileAppender" /> </root>   13) Disable Analytics for any content item that doesn’t add value. For example a page that redirects to another page.   14) When using Web User Controls avoid registering them on the page the asp.net way: <%@ Register Src="~/layouts/UserControls/MyControl.ascx" TagName="MyControl" TagPrefix="uc2" %> Use Sublayout web control instead – This way Sitecore caching could be leveraged <sc:Sublayout ID="ID" Path="/layouts/UserControls/MyControl.ascx" Cacheable="true" runat="server" />   15) Avoid querying for all children recursively when all items are direct children. Sitecore.Context.Database.SelectItems("/sitecore/content/Home//*"); //Use: Sitecore.Context.Database.GetItem("/sitecore/content/Home");   16) On IIS — you enable static & dynamic content compression on CM and CD More: http://technet.microsoft.com/en-us/library/cc754668%28WS.10%29.aspx   17) Enable HTTP Keep-alive and content expiration in IIS.   18) Use GUID’s when accessing items and fields instead of names or paths. Its faster and wont break your code when things get moved or renamed. 
Context.Database.GetItem("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}") Context.Item.Fields["{89D38A8F-394E-45B0-826B-1A826CF4046D}"]; //is better than Context.Database.GetItem("/Home/MyItem") Context.Item.Fields["FieldName"]   Hope this helps.

    Read the article

  • Element is already the child of another element.

    - by Erica
    I get the folowing error in my Silverlight application. But i cant figure out what control it is that is the problem. If i debug it don't break on anything in the code, it just fails in this framework callstack with only framework code. Is there any way to get more information on what part of a Silverlight app that is the problem in this case. Message: Sys.InvalidOperationException: ManagedRuntimeError error #4004 in control 'Xaml1': System.InvalidOperationException: Element is already the child of another element. at MS.Internal.XcpImports.CheckHResult(UInt32 hr) at MS.Internal.XcpImports.Collection_AddValue[T](PresentationFrameworkCollection1 collection, CValue value) at MS.Internal.XcpImports.Collection_AddDependencyObject[T](PresentationFrameworkCollection1 collection, DependencyObject value) at System.Windows.PresentationFrameworkCollection1.AddDependencyObject(DependencyObject value) at System.Windows.Controls.UIElementCollection.AddInternal(UIElement value) at System.Windows.PresentationFrameworkCollection1.Add(T value) at System.Windows.Controls.AutoCompleteBox.OnApplyTemplate() at System.Windows.FrameworkElement.OnApplyTemplate(IntPtr nativeTarget)

    Read the article

  • Why won't this Schema validate this XML file?

    - by Sergio Tapia
    The XML file: <Lista count="3"> <Pelicula nombre="Jurasic Park 3"> <Genero>Drama</Genero> <Director sexo="M">Esteven Spielberg</Director> <Temporada> <Anho>2002</Anho> <Semestre>Verano</Semestre> </Temporada> </Pelicula> <Pelicula nombre="Maldiciones"> <Genero>Ficcion</Genero> <Director sexo="M">Pedro Almodovar</Director> <Temporada> <Anho>2002</Anho> <Semestre>Verano</Semestre> </Temporada> </Pelicula> <Pelicula nombre="Amor en New York"> <Genero>Romance</Genero> <Director sexo="F">Katia Hertz</Director> <Temporada> <Anho>2002</Anho> <Semestre>Verano</Semestre> </Temporada> </Pelicula> </Lista> And here's the XML Schema file I made, it's not working. :\ <xsd:complexType name="Lista"> <xsd:attribute name="count" type="xsd:integer" /> <xsd:complexContent> <xsd:element name="Pelicula" type="xsd:string"> <xsd:attribute name="nombre" type="xsd:string" /> <xsd:complexType> <xsd:sequence> <xsd:element name="Genero" type="generoType"/> <xsd:element name="Director" type="directorType"> <xsd:attribute name="sexo" type="sexoType"/> </xsd:element> </xsd:element name="Temporada"> <xsd:complexType> <xsd:sequence> <xsd:element name="Anho" type="anhoType" /> <xsd:element name="Semestre" type="semestreType" /> </xsd:sequence> </xsd:complexType> <xsd:element></xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:complexContent> </xsd:complexType> <xsd:simpleType name="sexoType"> <xsd:restriction base="xsd:string"> <xsd:enumeration value="F"/> <xsd:enumeration value="M"/> </xsd:restriction> </xsd:simpleType> <xsd:simpleType name="directorType"> <xsd:restriction base="xsd:string" /> </xsd:simpleType> <xsd:simpleType name="generoType"> <xsd:restriction base="xsd:string"> <xsd:enumeration value="Drama"/> <xsd:enumeration value="Accion"/> <xsd:enumeration value="Romance"/> <xsd:enumeration value="Ficcion"/> </xsd:restriction> </xsd:simpleType> <xsd:simpleType name="semestreType"> <xsd:restriction base="xsd:string"> <xsd:enumeration value="Verano"/> <xsd:enumeration value="Invierno"/> </xsd:restriction> </xsd:simpleType> <xsd:simpleType name="anhoType"> <xsd:restriction base="xsd:integer"> <xsd:minInclusive value="1970"/> <xsd:maxInclusive value="2020"/> </xsd:restriction> </xsd:simpleType>
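    A note that may save whoever hits this some time: the schema as posted is not well-formed XML (for example, the stray </xsd:element name="Temporada"> closing tag, and xsd:attribute declarations placed before the content model), so a validator will reject the schema itself before it ever examines the instance document. A small sketch for surfacing the exact messages in .NET; the file names here are assumptions:

    using System;
    using System.Xml;
    using System.Xml.Schema;

    class ValidateAgainstSchema
    {
        static void Main()
        {
            var settings = new XmlReaderSettings();
            settings.ValidationType = ValidationType.Schema;
            settings.Schemas.Add(null, "peliculas.xsd"); // throws here if the schema itself is malformed
            settings.ValidationEventHandler += (sender, e) =>
                Console.WriteLine("{0}: {1}", e.Severity, e.Message);

            using (XmlReader reader = XmlReader.Create("peliculas.xml", settings))
            {
                while (reader.Read()) { } // validation happens as the document is read
            }
        }
    }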

    Read the article

  • Anti-Forgery Request Helpers for ASP.NET MVC and jQuery AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attack, ASP.NET MVC provides an excellent mechanism: The server prints tokens to cookie and inside the form; When the form is submitted to server, token in cookie and token inside the form are sent in the HTTP request; Server validates the tokens. To print tokens to browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> This invocation generates a token then writes inside the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and also writes into the cookie: __RequestVerificationToken_Lw__= J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, they are both sent to server. In the server side, [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions to validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, some problems are encountered. Specify validation on controller (not on each action) The server side problem is, It is expected to declare [ValidateAntiForgeryToken] on controller, but actually it has be to declared on each POST actions. Because POST actions are usually much more then controllers, this is a little crazy Problem Usually a controller contains actions for HTTP GET and actions for HTTP POST requests, and usually validations are expected for HTTP POST requests. So, if the [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute. { [HttpGet] public ActionResult Index() // Index() cannot work. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If browser sends an HTTP GET request by clicking a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is, [ValidateAntiForgeryToken] attribute must be distributed to each POST action:public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes. { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... } This is a little bit crazy, because one application can have a lot of POST actions. 
Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following wrapper class around ValidateAntiForgeryTokenAttribute can be helpful, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // GET actions are not affected. // Only HTTP POST requests are validated. } Now one single attribute on the controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs could be specified on the built-in [ValidateAntiForgeryToken] attribute, which would be easy to implement. Submit token via AJAX The browser-side problem is that if the server side turns on anti-forgery validation for POST, then AJAX POST requests will fail by default. Problem For AJAX scenarios, when the request is sent by jQuery instead of a form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST request will always be invalid, because the server-side code cannot see the token in the posted data. Solution The tokens are printed to the browser then sent back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called somewhere. Now the browser has the token in HTML and in the cookie. Then jQuery must find the printed token in the HTML, and append the token to the data before sending:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated into a tiny jQuery plugin:/// <reference path="jquery-1.4.2.js" /> (function ($) { $.getAntiForgeryToken = function (tokenWindow, appPath) { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window; appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : ""; // The name attribute is either __RequestVerificationToken, // or __RequestVerificationToken_{appPath}. var tokenName = "__RequestVerificationToken" + appPath; // Finds the <input type="hidden" name={tokenName} value="..." /> from the specified window.
// var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']"); var inputElements = tokenWindow.document.getElementsByTagName("input"); for (var i = 0; i < inputElements.length; i++) { var inputElement = inputElements[i]; if (inputElement.type === "hidden" && inputElement.name === tokenName) { return { name: tokenName, value: inputElement.value }; } } return null; }; $.appendAntiForgeryToken = function (data, token) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } // Gets token from current window by default. token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window). data = data ? data + "&" : ""; // If token exists, appends {token.name}={token.value} to data. return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data; }; // Wraps $.post(url, data, callback, type). $.postAntiForgery = function (url, data, callback, type) { return $.post(url, $.appendAntiForgeryToken(data), callback, type); }; // Wraps $.ajax(settings). $.ajaxAntiForgery = function (settings) { settings.data = $.appendAntiForgeryToken(settings.data); return $.ajax(settings); }; })(jQuery); In most scenarios, it is OK to just replace the $.post() invocation with $.postAntiForgery(), and replace $.ajax() with $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. There might be some scenarios with a custom token. For those, $.appendAntiForgeryToken() is provided:data = $.appendAntiForgeryToken(data, token); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); And there are scenarios where the token is not in the current window. For example, an HTTP POST request can be sent by an iframe, while the token is in the parent window. Here the window can be specified for $.getAntiForgeryToken():data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent)); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); If you have a better solution, please do tell me.

    Read the article

  • RDLC (VS 2010) How to access nested class or arrays on DataObjects

    - by gerard
    How can I access the TD.SubNumber property and Numbers[] on RDLC? I keep getting #Error on my expressions "=Fields!TD.Value.SubNumber" and "=Fields!Numbers.Value(0)". public class TestData { TestSubData tdata = new TestSubData(); public TestSubData TD { get { return tdata; } set { tdata = value; } } string m_Description; public string Description { get { return m_Description; } set { m_Description = value; } } int[] m_Numbers = new int[12]; public int?[] Numbers { get { return m_Numbers; } } } public class TestSubData { int x; public TestSubData() { } public int SubNumber { get { return x; } set { x = value; } } }

    Read the article

  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of Notes from the Field series. SQL Server Integration Service (SSIS) is one of the most key essential part of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I requested SSIS Expert Andy Leonard to discuss one of the most interesting concepts of SSIS Catalog Messages. There are plenty of interesting and useful information captured in the SSIS catalog and we will learn together how to explore the same. The SSIS Catalog captures a lot of cool information by default. Here’s a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It’s one of my favorite messages in the SSIS log because it gives me excellent information when I’m tuning SSIS data flows. The message reads similar to: Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory. The query: USE SSISDB GO DECLARE @MessageSourceType INT = 60 DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed ' DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was ' DECLARE @CacheUsedString VARCHAR(100) = 'The cache used ' DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%' DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%' DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%' SELECT operation_id , SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount , SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime , CASE WHEN (CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) / CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1)))) END AS LookupRowsPerSecond , SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) 
+ 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed ,CASE WHEN (CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))))= 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1)))) / CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) END AS LookupBytesPerRow FROM [catalog].[operation_messages] WHERE message_source_type = @MessageSourceType AND MESSAGE LIKE @StartOfIDSearchString GO Note that you have to set some parameter values: @MessageSourceType [int] – represents the message source type value from the following list:
    Value – Description
    10 – Entry APIs, such as T-SQL and CLR Stored procedures
    20 – External process used to run package (ISServerExec.exe)
    30 – Package-level objects
    40 – Control Flow tasks
    50 – Control Flow containers
    60 – Data Flow task
    70 – Custom execution message
Note: Taken from Reza Rad's (excellent!) helper.MessageSourceType table found here. @StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse. In this case, the string 'The Lookup processed ' identifies all the Lookup Transformation messages I desire to parse. @ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value. For this execution, I use the string 'The processing time was '. @CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache Used value. It returns the memory used, in bytes. For this execution, I use the string 'The cache used '. The other parameters are built from variations of the parameters listed above. The query parses the values into text. The string values are converted to numeric values for the ratio calculations: LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE statements check for denominators that equal 0. The results appear in an SSMS grid. This is not the only way to retrieve this information, and much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post. If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS

    Read the article

  • Controlling Crystal Reports Authentication

    - by Jason Ulloa
    Para todos los que hemos trabajamos con Crystal Reports, no es un secreto que cuando tratamos de conectar nuestro reporte directamente a la base de datos, se nos viene encima el problema de autenticación. Es decir nuestro reporte al momento de iniciar la carga nos solicita autentificarnos en el servidor y sino lo hacemos, simplemente no veremos el reporte. Esto, además de ser tedioso para los usuarios se convierte en un problema de seguridad bastante grande, de ahí que en la mayoría de los casos se recomienda utilizar dataset. Sin embargo, para todos los que aún sabiendo esto no desean utilizar datasets, sino que, quieren conectar su crystal directamente veremos como implementar una pequeña clase que nos ayudará con esa tarea. Generalmente, cuando trabajamos con una aplicación web, nuestra cadena de conexión esta incluida en el web.config y también en muchas ocasiones contiene los datos como el usuario y password para acceder a la base de datos.  De esta cadena de conexión y estos datos es de los que nos ayudaremos para implementar la autentificación en el reporte. Generalmente, la cadena de conexión se vería así <connectionStrings> <remove name="LocalSqlServer"/> <add name="xxx" connectionString="Data Source=.\SqlExpress;Integrated Security=False;Initial Catalog=xxx;user id=myuser;password=mypass" providerName="System.Data.SqlClient"/> </connectionStrings>   Para nuestro ejemplo, nombraremos a nuestra clase CrystalRules (es solo algo que pensé de momento) 1. Primer Paso Creamos una variable de tipo SqlConnectionStringBuilder, a la cual le asignaremos la cadena de conexión que definimos en el web.config, y que luego utilizaremos para obtener los datos del usuario y el password para el crystal report. SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings["xxx"].ConnectionString); 2. Implementación de propiedad Para ser más ordenados crearemos varias propiedad de tipo Privado, que se encargarán de recibir los datos de:   La Base de datos, el password, el usuario y el servidor private string _dbName; private string _serverName; private string _userID; private string _passWord;   private string dataBase { get { return _dbName; } set { _dbName = value; } }   private string serverName { get { return _serverName; } set { _serverName = value; } }   private string userName { get { return _userID; } set { _userID = value; } }   private string dataBasePassword { get { return _passWord; } set { _passWord = value; } } 3. Creación del Método para aplicar los datos de conexión Una vez que ya tenemos las propiedades, asignaremos a las variables los valores que se han recogido en el SqlConnectionStringBuilder. Y crearemos una variable de tipo ConnectionInfo para aplicar los datos de conexión. internal void ApplyInfo(ReportDocument _oRpt) { dataBase = builder.InitialCatalog; serverName = builder.DataSource; userName = builder.UserID; dataBasePassword = builder.Password;   Database oCRDb = _oRpt.Database; Tables oCRTables = oCRDb.Tables; //Table oCRTable = default(Table); TableLogOnInfo oCRTableLogonInfo = default(TableLogOnInfo); ConnectionInfo oCRConnectionInfo = new ConnectionInfo();   oCRConnectionInfo.DatabaseName = _dbName; oCRConnectionInfo.ServerName = _serverName; oCRConnectionInfo.UserID = _userID; oCRConnectionInfo.Password = _passWord;   foreach (Table oCRTable in oCRTables) { oCRTableLogonInfo = oCRTable.LogOnInfo; oCRTableLogonInfo.ConnectionInfo = oCRConnectionInfo; oCRTable.ApplyLogOnInfo(oCRTableLogonInfo);     }   }   4. 
Creating the report document and applying the security Once the data has been collected and assigned, we create a ReportDocument element, to which we assign the CrystalReportViewer, and we apply the access data obtained earlier public void loadReport(string repName, CrystalReportViewer viewer) {   // attached our report to viewer and set database login. ReportDocument report = new ReportDocument(); report.Load(HttpContext.Current.Server.MapPath("~/Reports/" + repName)); ApplyInfo(report); viewer.ReportSource = report; } In the end, our complete class would look like this: public class CrystalRules { SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings["Fatchoy.Data.Properties.Settings.FatchoyConnectionString"].ConnectionString);   private string _dbName; private string _serverName; private string _userID; private string _passWord;   private string dataBase { get { return _dbName; } set { _dbName = value; } }   private string serverName { get { return _serverName; } set { _serverName = value; } }   private string userName { get { return _userID; } set { _userID = value; } }   private string dataBasePassword { get { return _passWord; } set { _passWord = value; } }   internal void ApplyInfo(ReportDocument _oRpt) { dataBase = builder.InitialCatalog; serverName = builder.DataSource; userName = builder.UserID; dataBasePassword = builder.Password;   Database oCRDb = _oRpt.Database; Tables oCRTables = oCRDb.Tables; //Table oCRTable = default(Table); TableLogOnInfo oCRTableLogonInfo = default(TableLogOnInfo); ConnectionInfo oCRConnectionInfo = new ConnectionInfo();   oCRConnectionInfo.DatabaseName = _dbName; oCRConnectionInfo.ServerName = _serverName; oCRConnectionInfo.UserID = _userID; oCRConnectionInfo.Password = _passWord;   foreach (Table oCRTable in oCRTables) { oCRTableLogonInfo = oCRTable.LogOnInfo; oCRTableLogonInfo.ConnectionInfo = oCRConnectionInfo; oCRTable.ApplyLogOnInfo(oCRTableLogonInfo);     }   }   public void loadReport(string repName, CrystalReportViewer viewer) {   // attached our report to viewer and set database login. ReportDocument report = new ReportDocument(); report.Load(HttpContext.Current.Server.MapPath("~/Reports/" + repName)); ApplyInfo(report); viewer.ReportSource = report; }       #region instance   private static CrystalRules m_instance;   // Properties public static CrystalRules Instance { get { if (m_instance == null) { m_instance = new CrystalRules(); } return m_instance; } }   public DataDataContext m_DataContext { get { return DataDataContext.Instance; } }     #endregion instance   } While this solution is not robust and is not the most secure, in use cases such as an intranet, and when we are working against the clock, it can be a great help.
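    For completeness, a hypothetical call site from a page code-behind, using the singleton exposed above (the report file name and the viewer control ID are assumptions):

    protected void Page_Load(object sender, EventArgs e)
    {
        // CrystalReportViewer1 is the viewer control declared in the .aspx markup.
        CrystalRules.Instance.loadReport("SalesReport.rpt", CrystalReportViewer1);
    }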

    Read the article

  • jQuery ajax doesn't seem to be reading HTML data in Chromium

    - by Mahesh
    I have an HTML (App) file that reads another HTML (data) file via jQuery.ajax(). It then finds specific tags in the data HTML file and uses text within the tags to display sort-of tool tips. Here's the App HTML file: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US"> <head> <title>Test</title> <style type="text/css"> <!--/* <![CDATA[ */ body { font-family : sans-serif; font-size : medium; margin-bottom : 5em; } a, a:hover, a:visited { text-decoration : none; color : #2222aa; } a:hover { background-color : #eeeeee; } #stat_preview { position : absolute; background : #ccc; border : thin solid #aaa; padding : 3px; font-family : monospace; height : 2.5em; } /* ]]> */--> </style> <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.min.js"></script> <script type="text/javascript"> //<![CDATA[ $(document).ready(function() { $("#stat_preview").hide(); $(".cfg_lnk").mouseover(function () { lnk = $(this); $.ajax({ url: lnk.attr("href"), success: function (data) { console.log (data); $("#stat_preview").html("A heading<br>") .append($(".tool_tip_text", $(data)).slice(0,3).text()) .css('left', (lnk.offset().left + lnk.width() + 30)) .css('top', (lnk.offset().top + (lnk.height()/2))) .show(); } }); }).mouseout (function () { $("#stat_preview").hide(); }); }); //]]> </script> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> </head> <body> <h1>Test</h1> <ul> <li><a class="cfg_lnk" href="data.html">Sample data</a></li> </ul> <div id="stat_preview"></div> </body> </html> And here is the data HTML <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US"> <head> <title>Test</title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> </head> <body> <h1>Test</h1> <table> <tr> <td class="tool_tip_text"> Some random value 1</td> <td class="tool_tip_text"> Some random value 2</td> <td class="tool_tip_text"> Some random value 3</td> <td class="tool_tip_text"> Some random value 4</td> <td class="tool_tip_text"> Some random value 5</td> </tr> <tr> <td class="tool_top_text"> Some random value 11</td> <td class="tool_top_text"> Some random value 21</td> <td class="tool_top_text"> Some random value 31</td> <td class="tool_top_text"> Some random value 41</td> <td class="tool_top_text"> Some random value 51</td> </tr> </table> </body> </html> This is working as intended in Firefox, but not in Chrome (Chromium 5.0.356.0). The console.log (data) displays empty string in Chromium's JavaScript console. Firebug in Firefox, however, displays the entire data HTML. Am I missing something? Any pointers?

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
    I work with stored procedures in SQL Server pretty frequently and have often found myself with a need to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate the size and contents of which are not known at design time. In the past I’ve taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well but the VARCHAR variable has always felt like an un-wanted “middle man” in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list of values is usually going to contain only a few dozen values. Fortunately SQL Server 2008 introduced the concept of table-valued parameters which effectively eliminates the need for the clumsy middle man VARCHAR parameter. Example: Customer Transaction Summary Report Let’s say we have a report that can summarize the the transactions that we’ve conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it’s run for all customers, and sometimes it’s run for a handful of customers (i.e. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex it’s been packed into a stored procedure that accepts three parameters: @startDate – The beginning of the date range for which the report should be run. @endDate – The end of the date range for which the report should be run. @customerIds – The customer Ids for which the report should be run. Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter. Defining And Using The Table Type In order to use a table-valued parameter, we first need to tell SQL Server about what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called ‘IntegerListTableType’ like this: CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL) Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure. 
The parameter list for the stored procedure definition might look like:

    CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary
        @startDate datetime,
        @endDate datetime,
        @customerIds IntegerListTableType READONLY

Note the 'READONLY' keyword following the declaration of the @customerIds parameter. SQL Server requires any table-valued parameter be marked as 'READONLY', and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it's used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this:

    DECLARE @customerIdList IntegerListTableType
    INSERT @customerIdList VALUES (1)
    INSERT @customerIdList VALUES (2)
    INSERT @customerIdList VALUES (3)

    EXEC dbo.rpt_CustomerTransactionSummary
        @startDate = '2012-05-01',
        @endDate = '2012-06-01',
        @customerIds = @customerIdList

Note that we can simply declare a variable of type 'IntegerListTableType' just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with a SELECT ... INTO or INSERT ... SELECT statement if desired. Using The Table-Valued Parameter With ADO .NET Invoking a stored procedure with a table-valued parameter from ADO .NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here's some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure:

    var customerIdsParameter = new SqlParameter();
    customerIdsParameter.Direction = ParameterDirection.Input;
    customerIdsParameter.TypeName = "IntegerListTableType";
    customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");

All we're doing here is new'ing up an instance of SqlParameter, setting the parameter's direction, specifying the name of the User Defined Type that this parameter uses, and setting its value. We're assuming here that we have an IEnumerable<int> variable called 'selectedCustomerIds' containing all of the customer Ids for which the report should be run. The 'ToIntegerListDataTable' method is an extension method of the IEnumerable<int> type that looks like this:

    public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName)
    {
        var integerListDataTable = new DataTable();
        integerListDataTable.Columns.Add(columnName, typeof(int));
        foreach(var intValue in intValues)
        {
            var nextRow = integerListDataTable.NewRow();
            nextRow[columnName] = intValue;
            integerListDataTable.Rows.Add(nextRow);
        }

        return integerListDataTable;
    }

Since the 'IntegerListTableType' has a single int column called 'Value', we pass that in for the 'columnName' parameter to the extension method. The method creates a new single-columned DataTable using the provided column name then iterates over the items in the IEnumerable<int> instance adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter. Advanced Functionality
I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note here that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A User Defined Type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable (see the sketch at the end of this excerpt). Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters.

What About Reporting Services?

Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post, which I'll be putting together and posting sometime in the near future.
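Since the post points at SqlDataRecord without showing it, here's a rough sketch of that streaming approach under the same single-column 'IntegerListTableType' assumption (again, not code from the original article):

// Requires the Microsoft.SqlServer.Server namespace.
// Streams rows to the server without materializing a DataTable first.
private static IEnumerable<SqlDataRecord> ToSqlDataRecords(IEnumerable<int> values)
{
    // The metadata must match the user defined type: one INT column named 'Value'.
    var metaData = new SqlMetaData("Value", SqlDbType.Int);
    foreach (var value in values)
    {
        var record = new SqlDataRecord(metaData);
        record.SetInt32(0, value);
        yield return record;
    }
}

An IEnumerable<SqlDataRecord> can be assigned to the SqlParameter's Value in place of the DataTable, which avoids buffering large lists in client memory.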

    Read the article

  • Another design-related C++ question

    - by Kotti
Hi! I am trying to find some optimal solutions in C++ coding patterns, and this is one of my game-engine-related questions. Take a look at the game object declaration (I removed almost everything that has no connection to the question).

// Abstract representation of a game object
class Object : public Entity, IRenderable, ISerializable
{
    // Object parameters
    // Other not really important stuff

public:
    // @note Rendering template will never change while
    // the object 'lives'
    Object(RenderTemplate& render_template, /* params */) : /*...*/ { }

private:
    // Object rendering template
    RenderTemplate render_template;

public:
    /**
     * Default object render method
     * Draws rendering template data at (X, Y) with (Width, Height) dimensions
     *
     * @note If no appropriate rendering method overload is specified
     * for any derived class, this method is called
     *
     * @param Backend & b
     * @return void
     * @see
     */
    virtual void Render(Backend& backend) const
    {
        // Render sprite from object's
        // rendering template structure
        backend.RenderFromTemplate(render_template, x, y, width, height);
    }
};

Here is also the IRenderable interface declaration:

// Objects that can be rendered
interface IRenderable
{
    /**
     * Abstract method to render current object
     *
     * @param Backend & b
     * @return void
     * @see
     */
    virtual void Render(Backend& b) const = 0;
}

and a sample of a real object that is derived from Object (with severe simplifications :)

// Ball object
class Ball : public Object
{
    // Ball params

public:
    virtual void Render(Backend& b) const
    {
        b.RenderEllipse(/*params*/);
    }
};

What I wanted to get is the ability to have some sort of standard function that would draw a sprite for an object (this is Object::Render) if there is no appropriate overload. So, one can have objects without a Render(...) method, and if you try to render them, this default sprite-rendering routine is invoked. And one can have specialized objects that define their own way of being rendered. I think this way of doing things is quite good, but what I can't figure out is: is there any way to split the objects' "normal" methods (like Resize(...) or Rotate(...)) implementation from their rendering implementation? Because if everything is done the way described earlier, a common .cpp file that implements any type of object would generally mix the Resize(...), etc. methods' implementation with this virtual Render(...) method, and that seems to be a mess. I actually want to have the rendering procedures for the objects in one place and their "logic implementation" in another. Is there a way this can be done (maybe an alternative pattern, trick, or hint), or is this where all this polymorphic and virtual stuff sucks in terms of code placement?

    Read the article

  • How to best design a date/geographic proximity query on GAE?

    - by Dane
Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries. I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache. The main problem is that I can't query the datastore by location because GAE won't allow multiple numerical comparison statements (<, <=, >=, >) in one query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no go. Currently, my algorithm looks like this (a sketch of steps 2-4 follows at the end of this excerpt):

1.) Query by date and select
2.) Use the destination function from geopy's distance module to find the max and min latitudes and longitudes for the supplied distance
3.) Loop through the results and remove all with lat/lng outside the max/min
4.) Loop through again and use the distance function to check the exact distance, because step 2 will include some areas outside the radius. Remove results outside the supplied distance (is this 2/3/4 combination inefficient?)
5.) Assemble many-to-many lists and attach to objects (this is where I need to switch to bulk operations)
6.) Return to client

Here's my plan for using memcache.. let me know if I'm way out in left field on this, as I have no prior experience with memcache or server caching in general.

- Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date.
- Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years).
- Have a scheduled task that updates the pointers daily at 12am.
- Add new inserts to the cache as well as the datastore; update pointers.

Using this design, the algorithm would now look like:

1.) Use pointers to slice off the appropriate chunk of the list based on the supplied date.
2-4.) Same as the above algorithm, except with geo objects
5.) Use a bulk operation to select full tournaments using the remaining geo objects' event_ids
6.) Assemble many-to-manys
7.) Return to client

Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane
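For what it's worth, steps 2 through 4 of the first algorithm amount to a cheap bounding-box rejection followed by an exact great-circle check. Here is a rough sketch of that filter, written in C# with the haversine formula standing in for geopy's distance function (the property names on the result objects are assumptions):

const double EarthRadiusKm = 6371.0;

static double ToRadians(double degrees) { return degrees * Math.PI / 180.0; }

// Exact great-circle distance between two points, in kilometers.
static double HaversineKm(double lat1, double lng1, double lat2, double lng2)
{
    double dLat = ToRadians(lat2 - lat1);
    double dLng = ToRadians(lng2 - lng1);
    double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
               Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
               Math.Sin(dLng / 2) * Math.Sin(dLng / 2);
    return EarthRadiusKm * 2.0 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1.0 - a));
}

// Step 3: cheap rectangle test; step 4: exact distance test on the survivors.
var nearby = results
    .Where(r => r.Lat >= minLat && r.Lat <= maxLat && r.Lng >= minLng && r.Lng <= maxLng)
    .Where(r => HaversineKm(centerLat, centerLng, r.Lat, r.Lng) <= radiusKm);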

    Read the article

  • linear interpolation on 8bit microcontroller

    - by JB
I need to do a linear interpolation over time between two values on an 8 bit PIC microcontroller (specifically a 16F627A, but that shouldn't matter) using PIC assembly language, although I'm looking for an algorithm here as much as actual code. I need to take an 8 bit starting value, an 8 bit ending value and a position between the two (currently represented as an 8 bit number 0-255, where 0 means the output should be the starting value and 255 means it should be the final value, but that can change if there is a better way to represent this) and calculate the interpolated value. Now, PIC doesn't have a divide instruction, so I could code up a general purpose divide routine and effectively calculate (B-A)*(x/255)+A at each step, but I feel there is probably a much better way to do this on a microcontroller than the way I'd do it on a PC in C++. Has anyone got any suggestions for implementing this efficiently on this hardware?
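For reference, one common division-free approach is to blend the endpoints and replace the divide-by-255 with an add-and-shift approximation. This sketch is in C# purely for readability; the same 16-bit adds and shifts translate to PIC assembly, where the multiplies would themselves be small shift-and-add loops (untested on the 16F627A):

// Interpolates between a (at t=0) and b (at t=255) without a divide instruction.
static byte Lerp8(byte a, byte b, byte t)
{
    // Weighted blend: (a*(255-t) + b*t) / 255.
    int x = a * (255 - t) + b * t;      // at most 255*255 = 65025, fits in 16 bits
    x += 128;                           // rounding bias
    return (byte)((x + (x >> 8)) >> 8); // exact x/255 for values in this range
}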

    Read the article

  • Manipulate COBOL data structure

    - by Morewinder
Hello. I would like some information on manipulating tables. I'm running into a few problems with a piece of COBOL code like the one below:

01 TABLE-1.
   05 STRUCT-1 OCCURS 25 TIMES.
      10 VALUE-1 PIC AAA.
      10 VALUE-2 PIC 9(5)V999.
   05 NUMBER-OF-OCCURS PIC 99.

How do you update values (update a VALUE-2 when you know a VALUE-1)? How do you look up a value and add a new one? Thanks a lot!

    Read the article

  • ComboBox WPF item not being selected

    - by Greg R
I am trying to bind a combo box to a list of objects, and it works great, besides the selected value. Am I missing something?

<ComboBox ItemsSource="{Binding OrderInfoVm.AllCountries}"
          SelectedValuePath="country_code"
          DisplayMemberPath="country_name"
          SelectedValue="{Binding OrderInfoVm.BillingCountry}" />

Basically I want to bind the value to country codes and set the selected value to the country code bound to OrderInfoVm.BillingCountry (which implements INotifyPropertyChanged). Initially when the control loads, the selected value is empty, but on click BillingCountry is populated. The selected value does not seem to change. How can I remedy that?
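For reference, here is a minimal sketch of the INotifyPropertyChanged pattern the poster mentions; the class and property names are assumed from the question, not confirmed code:

public class OrderInfoVm : INotifyPropertyChanged
{
    private string _billingCountry;

    // SelectedValue only refreshes in the view if the setter raises PropertyChanged.
    public string BillingCountry
    {
        get { return _billingCountry; }
        set
        {
            if (_billingCountry == value) return;
            _billingCountry = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("BillingCountry"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}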

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things:

Organizations are still determining what their dataflow and analysis workloads will comprise
Small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options
The default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation.

However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices:

The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis.
The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis.
Writing your own implementation of the abstract class org.apache.hadoop.mapred.TaskScheduler is an option, but usually overkill.

If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require. To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:

<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
  <name>mapred.fairscheduler.allocation.file</name>
  <value>/path/to/allocations.xml</value>
</property>
<property>
  <name>mapred.fairscheduler.poolnameproperty</name>
  <value>pool.name</value>
</property>
<property>
  <name>pool.name</name>
  <value>${user.name}</value>
</property>

What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:

<?xml version="1.0"?>
<allocations>
  <pool name="dan">
    <minMaps>5</minMaps>
    <minReduces>5</minReduces>
    <maxMaps>25</maxMaps>
    <maxReduces>25</maxReduces>
    <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
  </pool>
  <user name="dan">
    <maxRunningJobs>6</maxRunningJobs>
  </user>
  <userMaxJobsDefault>3</userMaxJobsDefault>
  <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
</allocations>

In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker.
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.

    Read the article

  • I've Heard Global Variables Are Bad, What Alternative Solution Should I Use?

    - by Jay
I've read all over the place that global variables are bad and alternatives should be used. In Javascript specifically, what solution should I choose? I'm thinking of a function that, when fed two arguments (function globalVariables(Variable, Value)), looks to see if Variable exists in a local array; if it does, it sets its value to Value, otherwise Variable and Value are appended. If the function is called without arguments (function globalVariables()), it returns the array. Perhaps if the function is fired with just one argument (function globalVariables(Variable)), it returns the value of Variable in the array. What do you think? I'd like to hear your alternative solutions and arguments for using global variables.

    Read the article

  • Event Logging in LINQ C# .NET

The first thing you'll want to do before using this code is to create a table in your database called TableHistory:

CREATE TABLE [dbo].[TableHistory] (
    [TableHistoryID] [int] IDENTITY NOT NULL ,
    [TableName] [varchar] (50) NOT NULL ,
    [Key1] [varchar] (50) NOT NULL ,
    [Key2] [varchar] (50) NULL ,
    [Key3] [varchar] (50) NULL ,
    [Key4] [varchar] (50) NULL ,
    [Key5] [varchar] (50) NULL ,
    [Key6] [varchar] (50) NULL ,
    [ActionType] [varchar] (50) NULL ,
    [Property] [varchar] (50) NULL ,
    [OldValue] [varchar] (8000) NULL ,
    [NewValue] [varchar] (8000) NULL ,
    [ActionUserName] [varchar] (50) NOT NULL ,
    [ActionDateTime] [datetime] NOT NULL
)

Once you have created the table, you'll need to add it to your custom LINQ class (which I will refer to as DboDataContext), thus creating the TableHistory class. Then, you'll need to add the History.cs file to your project. You'll also want to add the following code to your project to get the system date:

public partial class DboDataContext
{
    [Function(Name = "GetDate", IsComposable = true)]
    public DateTime GetSystemDate()
    {
        MethodInfo mi = MethodBase.GetCurrentMethod() as MethodInfo;
        return (DateTime)this.ExecuteMethodCall(this, mi, new object[] { }).ReturnValue;
    }
}

private static Dictionary<Type, Delegate> _cachedIL = new Dictionary<Type, Delegate>();

public static T CloneObjectWithIL<T>(T myObject)
{
    Delegate myExec = null;
    if (!_cachedIL.TryGetValue(typeof(T), out myExec))
    {
        // Create ILGenerator
        DynamicMethod dymMethod = new DynamicMethod("DoClone", typeof(T), new Type[] { typeof(T) }, true);
        ConstructorInfo cInfo = myObject.GetType().GetConstructor(new Type[] { });
        ILGenerator generator = dymMethod.GetILGenerator();
        LocalBuilder lbf = generator.DeclareLocal(typeof(T));
        //lbf.SetLocalSymInfo("_temp");
        generator.Emit(OpCodes.Newobj, cInfo);
        generator.Emit(OpCodes.Stloc_0);
        foreach (FieldInfo field in myObject.GetType().GetFields(
            System.Reflection.BindingFlags.Instance |
            System.Reflection.BindingFlags.Public |
            System.Reflection.BindingFlags.NonPublic))
        {
            // Load the new object on the eval stack... (currently 1 item on eval stack)
            generator.Emit(OpCodes.Ldloc_0);
            // Load initial object (parameter) (currently 2 items on eval stack)
            generator.Emit(OpCodes.Ldarg_0);
            // Replace value by field value (still currently 2 items on eval stack)
            generator.Emit(OpCodes.Ldfld, field);
            // Store the value of the top on the eval stack into
            // the object underneath that value on the value stack.
            // (0 items on eval stack)
            generator.Emit(OpCodes.Stfld, field);
        }
        // Load new constructed obj on eval stack -> 1 item on stack
        generator.Emit(OpCodes.Ldloc_0);
        // Return constructed object. --> 0 items on stack
        generator.Emit(OpCodes.Ret);
        myExec = dymMethod.CreateDelegate(typeof(Func<T, T>));
        _cachedIL.Add(typeof(T), myExec);
    }
    return ((Func<T, T>)myExec)(myObject);
}

I got both of the above methods off of the net somewhere (maybe even from CodeProject), but it's been long enough that I can't recall where I got them.

Explanation of the History Class

The History class records changes by creating a TableHistory record, inserting the values for the primary key for the table being modified into the Key1, Key2, ..., Key6 columns (if you have more than 6 values that make up a primary key on any table, you'll want to modify this), setting the type of change being made in the ActionType column (INSERT, UPDATE, or DELETE), the old value and new value if it happens to be an update action, and the date and Windows identity of the user who made the change. Let's examine what happens when a call is made to the RecordLinqInsert method:

public static void RecordLinqInsert(DboDataContext dbo, IIdentity user, object obj)
{
    TableHistory hist = NewHistoryRecord(obj);
    hist.ActionType = "INSERT";
    hist.ActionUserName = user.Name;
    hist.ActionDateTime = dbo.GetSystemDate();
    dbo.TableHistories.InsertOnSubmit(hist);
}

private static TableHistory NewHistoryRecord(object obj)
{
    TableHistory hist = new TableHistory();
    Type type = obj.GetType();
    PropertyInfo[] keys;
    if (historyRecordExceptions.ContainsKey(type))
    {
        keys = historyRecordExceptions[type].ToArray();
    }
    else
    {
        keys = type.GetProperties().Where(o => AttrIsPrimaryKey(o)).ToArray();
    }
    if (keys.Length > KeyMax)
        throw new HistoryException("object has more than " + KeyMax.ToString() + " keys.");
    for (int i = 1; i <= keys.Length; i++)
    {
        typeof(TableHistory)
            .GetProperty("Key" + i.ToString())
            .SetValue(hist, keys[i - 1].GetValue(obj, null).ToString(), null);
    }
    hist.TableName = type.Name;
    return hist;
}

protected static bool AttrIsPrimaryKey(PropertyInfo pi)
{
    var attrs = from attr in pi.GetCustomAttributes(typeof(ColumnAttribute), true)
                where ((ColumnAttribute)attr).IsPrimaryKey
                select attr;
    if (attrs != null && attrs.Count() > 0)
        return true;
    else
        return false;
}

RecordLinqInsert takes as input a data context which it will use to write to the database, the user, and the LINQ object to be recorded (a single object, for instance, a Customer or Order object if you're using AdventureWorks). It then calls the NewHistoryRecord method, which uses LINQ to Objects in conjunction with the AttrIsPrimaryKey method to pull all the primary key properties, set the Key1-KeyN properties of the TableHistory object, and return the new TableHistory object. The code would be called in an application, like so:
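The original calling example is truncated at this point. Purely as a hypothetical sketch of what such a call might look like (the entity names and the static History class are assumptions based on the article's description, not the author's code):

using (var dbo = new DboDataContext())
{
    var customer = new Customer { Name = "Acme" }; // assumed entity
    dbo.Customers.InsertOnSubmit(customer);
    dbo.SubmitChanges(); // assigns the identity key that NewHistoryRecord reads

    History.RecordLinqInsert(dbo, WindowsIdentity.GetCurrent(), customer);
    dbo.SubmitChanges();
}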

    Read the article

  • collect2: ld returned 1 exit status error in Xcode

    - by user573949
Hello, I'm getting the error Command /Developer/usr/bin/gcc-4.2 failed with exit code 1, and when the full log is opened, the error is more accurately listed as collect2: ld returned 1 exit status, from this simple Cocoa code:

#import "Controller.h"

@implementation Controller

int skillcheck (int level, int modifer, int difficulty)
{
    if (level + modifer >= difficulty) {
        return 1;
    }
    if (level + modifer <= difficulty) {
        return 0;
    }
}

int main ()
{
    skillcheck(10, 2, 10);
}

@end

The .h file is this:

//
// Controller.h
//
// Created by Duo Oratar on 15/01/2011.
// Copyright 2011 __MyCompanyName__. All rights reserved.
//

#import <Cocoa/Cocoa.h>

@interface Controller : NSObject {
    int skillcheck;
    int contestcheck;
}

@end

No line was specified that the error came from. Does anyone know what the source of this error is, and more importantly, how to fix it?

EDIT: I removed the class, so now I have this:

//
// Controller.m
//
// Created by Duo Oratar on 15/01/2011.
// Copyright 2011 __MyCompanyName__. All rights reserved.
//

#import "Controller.h"

int skillcheck (int level, int modifer, int difficulty)
{
    if (level + modifer >= difficulty) {
        return 1;
    }
    if (level + modifer <= difficulty) {
        return 0;
    }
}

int main ()
{
    skillcheck(10, 2, 10);
}

and for the .h file:

//
// Controller.h
//
// Created by Duo Oratar on 15/01/2011.
// Copyright 2011 __MyCompanyName__. All rights reserved.
//

#import <Cocoa/Cocoa.h>

and the log says (thanks to the guy who said how to open it):

Ld build/Debug/Calculator.app/Contents/MacOS/Calculator normal x86_64
cd /Users/kids/Desktop/Calculator
setenv MACOSX_DEPLOYMENT_TARGET 10.6
/Developer/usr/bin/gcc-4.2 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk -L/Users/kids/Desktop/Calculator/build/Debug -F/Users/kids/Desktop/Calculator/build/Debug -filelist /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/Calculator.LinkFileList -mmacosx-version-min=10.6 -framework Cocoa -o /Users/kids/Desktop/Calculator/build/Debug/Calculator.app/Contents/MacOS/Calculator
ld: duplicate symbol _main in /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/Controller.o and /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/main.o
collect2: ld returned 1 exit status
Command /Developer/usr/bin/gcc-4.2 failed with exit code 1
ld: duplicate symbol _main in /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/Controller.o and /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/main.o
Command /Developer/usr/bin/gcc-4.2 failed with exit code 1

    Read the article

  • HTML5 Input type=date Formatting Issues

    - by Rick Strahl
One of the nice features in HTML5 is the ability to specify a specific input type for HTML text input boxes. There's a host of very useful input types available, including email, number, date, datetime, month, range, search, tel, time, url and week. For a more complete list you can check out the MDN reference. Date input types also support automatic validation, which can be useful in some scenarios but maybe can get in the way at other times. One of the more common input types, and one that can most benefit from a custom UI for selection, is of course date input. Almost every application could use a decent date representation, and HTML5's date input type seems to push in the right direction. It'd be nice if you could just say:

<form action="DateTest.html">
    <label for="FromDate">Enter a Date:</label>
    <input type="date" id="FromDate" name="FromDate" value="11/08/2012" class="date" />
    <hr />
    <input type="submit" id="btnSubmit" name="btnSubmit" value="Save Date" class="smallbutton" />
</form>

but if you expect that to just work, you're likely to be pretty disappointed.

Problem #1: Browser Support

For starters there's browser support. Out of the major browsers, only the latest versions of WebKit- and Opera-based browsers seem to support date input. Neither FireFox, nor any version of Internet Explorer (including the new touch-enabled IE10 in Windows RT), supports input type=date. Browser support is an issue, but it would be OK if it wasn't for problem #2.

Problem #2: Date Formatting

If you look at my date input from before:

<input type="date" id="FromDate" name="FromDate" value="11/08/2012" class="date" />

You can see that my date is formatted in local date format (ie. en-us). Now when I run this, sadly the form that comes up in Chrome (and also iOS mobile browsers) comes up like this: Chrome isn't recognizing my local date string. Instead it's expecting my date format to be provided in ISO 8601 format, which is: 2012-11-08. So if I change the date input field to:

<input type="date" id="FromDate" name="FromDate" value="2012-10-08" class="date" />

I correctly get the date field filled in. Also, when I pick a date with the DatePicker, the date value that is returned is also set to the ISO date format. Yet notice how the date is still formatted to the local date time format (ie. en-US format). So if I pick a new date and then save, the value field is set back to 2012-11-15, using the ISO format. The same is true for Opera and iOS browsers, and I suspect any other WebKit-style browser and their date pickers.

So to summarize, input type=date:
Expects ISO 8601 format dates to display initial values
Sets selected date values to ISO 8601

Now what?

This would sort of make sense, if all browsers supported input type=date. It'd be easy, because you could just format dates appropriately when you set the date value into the control by applying the appropriate culture formatting (ie. .ToString("yyyy-MM-dd"); see the sketch below). .NET is actually smart enough to pick up the date on the other end for modelbinding when ISO 8601 is used. For other environments this might be a bit more tricky. input type=date is clearly the way to go forward. Date controls implemented in HTML are going the way of the dodo, given the intricacies of mobile platforms and scaling for both desktop and mobile. I've been using jQuery UI Datepicker for ages, but once going to mobile that's no longer an option, as the control doesn't scale down well for mobile apps (at least not without major re-styling).
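As a quick aside, here's a minimal sketch of the server-side half of that formatting advice; the Razor usage is an assumed example, not something from the original post:

// Emit the value attribute in ISO 8601 regardless of the server's locale.
string isoDate = DateTime.Today.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture);
// e.g. in a Razor view: <input type="date" name="FromDate" value="@isoDate" />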
It also makes a lot of sense for the browser to provide this functionality - creating a consistent date input experience across apps only makes sense, which is why I find it baffling that neither FireFox nor IE 10 deigns it necessary to support date input natively. The problem is that a large number of even the latest and greatest browsers don't support this. So now you're stuck with not knowing what date format you have to serve, since neither the local format nor the ISO format works in all cases. For my current app I just broke down and used the ISO format, and so I'll live with the non-local date format.

<input type="date" id="ToDate" name="ToDate" value="2012-11-08" class="date"/>

Here's what this looks like on Chrome: Here's what it looks like on my iPhone: Both Chrome and the phone do this the way it should be. For the phone especially, this demonstrates why we'd want this - the built-in date picker there certainly beats manually trying to edit the date using finger gymnastics, and it's one of the easiest ways to pick a date I can think of (ie. easier to use than your typical date picker). Finally, here's what the date looks like in FireFox: Certainly this is not the ideal date format, but it's clear enough, I suppose. If users enter a date in local US format, that works as well (but won't work for other locales). It'll have to do. Over time one can only hope that other browsers will finally decide to implement this functionality natively to provide a uniform experience. Until then, incomplete solutions it is.

Related Posts: Html 5 Input Types - How useful is this really going to be?

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, HTML.

    Read the article

< Previous Page | 372 373 374 375 376 377 378 379 380 381 382 383  | Next Page >