Search Results

Search found 16054 results on 643 pages for 'reference architecture'.


  • String to array or Array to string tips on formats, etc

    - by user316841
    Hi, first of all thanks for taking your time! I'm a junior dev working with PHP + MySQL.

    My issue: I'm saving data from a form to my database. From this form there's only a need to save the contacts: name, phone number, address. But it would be nice to keep a small reference to the user's answers. Let's say for each question we've got a value between 1 and 4. Since what's needed is just the personal contacts, there's no need to create a table just for the answers. I'm thinking of recording each question/answer as a letter and its corresponding value, for example (A2, B1, C5, D3, etc.).

    My question is: is there a format I could handle easily afterwards? Something I can convert back to an array (string to array) in case the client changes their mind and asks for this data to be placed in table columns? Just to prevent that situation! Example: from (A2, B1, C5) to array( "A" => "2", "B" => "1", "C" => "5" ).

    For now I guess regex is the answer, but it's always hard to figure out and I always get into trouble =) Thanks!
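
    As long as each entry is a single letter followed by its number, the format round-trips without regex at all: split on commas and peel off the first character. A minimal sketch of the decoding idea (shown in Java purely for illustration; PHP's explode() and substr() would do the same job, and the class/method names below are made up):

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class AnswerCodes {
            // Turns "A2, B1, C5" back into {A=2, B=1, C=5}.
            // Assumes each entry is one letter followed by its digits.
            static Map<String, Integer> decode(String encoded) {
                Map<String, Integer> answers = new LinkedHashMap<>();
                for (String part : encoded.split("\\s*,\\s*")) {
                    if (part.isEmpty()) continue;
                    answers.put(part.substring(0, 1),
                                Integer.valueOf(part.substring(1)));
                }
                return answers;
            }

            public static void main(String[] args) {
                System.out.println(decode("A2, B1, C5")); // prints {A=2, B=1, C=5}
            }
        }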

    Read the article

  • ASP.NET MVC and NHibernate coupling

    - by Ben
    I have just started learning NHibernate. Over the past few months I have been using IoC / DI (StructureMap) and the repository pattern, and it has made my applications much more loosely coupled and easier to test. When switching my persistence layer to NHibernate I decided to stick with my repositories.

    Currently I am creating a new session on each method call, but of course this means that I cannot benefit from lazy loading. Therefore I wish to implement session-per-request, but in doing so this will make my web project dependent on NHibernate (perhaps this is not such a bad thing?). I was planning to inject ISession into my repositories and create and dispose sessions on BeginRequest/EndRequest events (see http://ayende.com/Blog/archive/2009/08/05/do-you-need-a-framework.aspx).

    Is this a good approach? Presumably I cannot use session-per-request without having a reference to NHibernate in my web project?

    Having the web project dependent on NHibernate prompts my next (few) questions: why even bother with the repository? Since my web app is calling services that talk to the repositories, why not ditch the repositories and just add my NHibernate persistence code inside the services? And finally, is there really any need to split out into so many projects? Is a web project and an infrastructure project sufficient?

    I realise that I have veered off a bit from my original question, but it seems that everyone has their own opinion on these topics. Some people use the repository pattern with NHibernate, some don't. Some people keep their mapping files with the related classes, others have a separate project for this. Many thanks, Ben

    Read the article

  • Why does my DataTemplate break the WPF designer?

    - by PRINCESS FLUFF
    Why does the DataTemplate line break the WPF designer in Visual Studio 2008? The program compiles and runs properly, and the DataTemplate is applied as it should. However, the entire DataTemplate block of code is underlined in red, and when I simply build the program without running it, I get the error "Type reference cannot find public type named 'Character'". How come the designer can't find it, yet the program applies the template properly?

        <UserControl x:Class="WPF_Tests.Tests.TwoCollecViews.TwoViews"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:DetailsPane="clr-namespace:WPF_Tests.Tests.DetailsPane" >
            <UserControl.Resources>
                <DataTemplate DataType="{x:Type DetailsPane:Character}">
                    <StackPanel Orientation="Horizontal">
                        <TextBlock Text="{Binding Path=Name}"></TextBlock>
                    </StackPanel>
                </DataTemplate>
            </UserControl.Resources>
            <Grid>
                <ListBox ItemsSource="{Binding Path=Characters}" />
            </Grid>
        </UserControl>

    EDIT: I am being told that this may be a bug in Visual Studio 2008, as it worked correctly in 2010. You can download the code here: http://www.mediafire.com/?z1myytvwm4n - the Test/TwoCollec xaml file's designer will break with this code.

    Read the article

  • Bash and regex problem : check for tokens entered into a Coke vending machine

    - by Michael Mao
    Hi all: Here is a "challenge question" I've got from Linux system programming lecture. Any of the following strings will give you a Coke if you kick: L = { aaaa, aab, aba, baa, bb, aaaa"a", aaaa"b", aab"a", … ab"b"a, ba"b"a, ab"bbbbbb"a, ... } The letters shown in wrapped double quotes indicate coins that would have fallen through (but those strings are still part of the language in this example). Exercise (a bit hard) show this is the language of a regular expression And this is what I've got so far : #!/usr/bin/bash echo "A bottle of Coke costs you 40 cents" echo -e "Please enter tokens (a = 10 cents, b = 20 cents) in a sequence like 'abba' :\c" read tokens #if [ $tokens = aaaa ]||[ $tokens = aab ]||[ $tokens = bb ] #then # echo "Good! now a coke is yours!" #else echo "Thanks for your money, byebye!" if [[ $token =~ 'aaaa|aab|bb' ]] then echo "Good! now a coke is yours!" else echo "Thanks for your money, byebye!" fi Sadly it doesn't work... always outputs "Thanks for your money, byebye!" I believe something is wrong with syntax... We didn't provided with any good reference book and the only instruction from the professor was to consult "anything you find useful online" and "research the problem yourself" :( I know how could I do it in any programming language such as Java, but get it done with bash script + regex seems not "a bit hard" but in fact "too hard" for anyone with little knowledge on something advanced as "lookahead"(is this the terminology ?) I don't know if there is a way to express the following concept in the language of regex: Valid entry would consist of exactly one of the three components : aaaa, aab and bb, regardless of order, followed by an arbitrary sequence of a or b's So this is what is should be like : (a{4}Ua{2}bUb{2})(aUb)* where the content in first braces is order irrelevant. Thanks a lot in advance for any hints and/or tips :)

    Read the article

  • Does JAXWS client make a difference between an empty collection and a null collection value as returned?

    - by snowflake
    Since JAX-WS relies on JAXB, and since I observed the code that unpacks the XML bean in the JAXB Reference Implementation, I guess the difference is not made and that a JAXWS client always returns an empty collection, even if the webservice result was a null element:

        public T startPacking(BeanT bean, Accessor<BeanT, T> acc) throws AccessorException {
            T collection = acc.get(bean);
            if(collection==null) {
                collection = ClassFactory.create(implClass);
                if(!acc.isAdapted())
                    acc.set(bean,collection);
            }
            collection.clear();
            return collection;
        }

    I agree that for best interoperability service contracts should be non-ambiguous and avoid such differences, but it seems that the JAX-WS service I'm invoking (hosted on a JBoss server with the JBossWS implementation) is returning, as expected, either null or an empty collection (tested with SoapUI).

    I used code generated by wsimport for my test. The return element is defined as:

        @XmlElement(name = "return", nillable = true)
        protected List<String> _return;

    I even tested changing the Response class getReturn method from:

        public List<String> getReturn() {
            if (_return == null) {
                _return = new ArrayList<String>();
            }
            return this._return;
        }

    to:

        public List<String> getReturn() {
            return this._return;
        }

    but without success. Any helpful information/comment regarding this problem is welcome!

    Read the article

  • @Autowire strange problem

    - by Javi
    Hello, I have a strange behaviour when autowiring. I have code similar to this one, and it works:

        @Controller
        public class Class1 {
            @Autowired
            private Class2 object2;
            ...
        }

        @Service
        @Transactional
        public class Class2 {
            ...
        }

    The problem is that I need Class2 to implement an interface, so I've only changed Class2 and it's now like:

        @Controller
        public class Class1 {
            @Autowired
            private Class2 object2;
            ...
        }

        @Service
        @Transactional
        public class Class2 implements IServiceReference<Class3, Long> {
            ...
        }

        public interface IServiceReference<T, PK extends Serializable> {
            public T reference(PK id);
        }

    With this code I get an org.springframework.beans.factory.NoSuchBeanDefinitionException (no matching bean of type) for Class2. It seems that the @Transactional annotation is not compatible with the interface, because if I remove the @Transactional annotation or the "implements IServiceReference" the problem disappears and the bean is injected (though I need to have both in this class). It also happens if I put the @Transactional annotation on the methods instead of on the class.

    I use Spring 3.0.2 if this helps. Is the interface not compatible with the transactional method? Might it be a Spring bug? Thanks
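
    The usual explanation, offered here as a hedged guess rather than a confirmed diagnosis: with default transaction settings Spring proxies a @Transactional bean that implements an interface using a JDK dynamic proxy, and that proxy implements IServiceReference but is not a Class2, so an injection point typed as the concrete class finds no matching bean. A minimal sketch of the common fix is to depend on the interface instead; the usual alternative is forcing class-based (CGLIB) proxies with proxy-target-class="true":

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Controller;

        // IServiceReference and Class3 are the types declared in the question.
        @Controller
        public class Class1 {

            // Inject by the interface that the proxy actually implements.
            @Autowired
            private IServiceReference<Class3, Long> object2;

            // ...
        }

        // Alternative (XML config, shown as a comment): keep injecting Class2,
        // but make Spring create CGLIB subclass proxies, which stay assignable
        // to the concrete class:
        // <tx:annotation-driven transaction-manager="transactionManager"
        //                       proxy-target-class="true"/>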

    Read the article

  • VS 2008 designer and usercontrol.

    - by Ram
    Hello, I have created a custom data grid control. I dragged it onto a Windows form, set its properties (columns and so on) and ran the project. It built successfully and I am able to view the grid control on the form. Now, if I try to view that form in the designer, I get the following error:

        Object reference not set to an instance of an object.

        Instances of this error (1)
        1. Hide Call Stack
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.GetMemberTargetObject(XmlElementData xmlElementData, String& member)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.CreateAssignStatement(XmlElementData xmlElement)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.XmlElementData.get_CodeDomElement()
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.EndElement(String prefix, String name, String urn)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.Parse(XmlReader reader)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.ParseXml(String xmlStream, CodeStatementCollection statementCollection, String fileName, String methodName)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.VSCodeDomParser.OnMethodPopulateStatements(Object sender, EventArgs e)
        at System.CodeDom.CodeMemberMethod.get_Statements()
        at System.ComponentModel.Design.Serialization.TypeCodeDomSerializer.Deserialize(IDesignerSerializationManager manager, CodeTypeDeclaration declaration)
        at System.ComponentModel.Design.Serialization.CodeDomDesignerLoader.PerformLoad(IDesignerSerializationManager manager)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.VSCodeDomDesignerLoader.PerformLoad(IDesignerSerializationManager serializationManager)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.VSCodeDomDesignerLoader.DeferredLoadHandler.Microsoft.VisualStudio.TextManager.Interop.IVsTextBufferDataEvents.OnLoadCompleted(Int32 fReload)

    If I ignore the exception, the form appears blank with no sign of the grid control on it. However, I can see the code for the grid in the designer file. Any pointer on this would be a great help.

    I have customized the grid for my requirements, for example by adding a custom text box. I have defined 3 constructors:

        public GridControl()
        public GridControl(IContainer container)
        protected GridControl(SerializationInfo info, StreamingContext context)

    Read the article

  • using EventToCommand & PassEventArgsToCommand :: how to get sender, or better metaphor?

    - by JoeBrockhaus
    The point of what I'm doing is that there are a lot of things that need to happen in the viewmodel when the view has been loaded, not in the constructor. I could wire up event handlers and send messages, but that just seems kinda sloppy to me. I'm implementing a base view and base viewmodel where this logic is contained, so all my views get it by default, hopefully. Perhaps I can't even get what I'm wanting: the sender. I just figured this is what RoutedEventArgs.OriginalSource was supposed to be?

    [Edit] In the meantime, I've hooked up an EventHandler in the xaml.cs, and sure enough, OriginalSource is null there as well. So I guess really I need to know if it's possible to get a reference to the view/sender in the Command as well? [/Edit]

    My implementation requires that a helper class for my viewmodels, which is responsible for creating 'windows', knows about the 'host' control that all the windows get added to. I'm open to suggestions for accomplishing this outside the scope of using EventToCommand. :)

    (The code for Unloaded is the same.)

        #region ViewLoadedCommand

        private RelayCommand<RoutedEventArgs> _viewLoadedCommand = null;

        /// <summary>
        /// Command to handle the control's Loaded event.
        /// </summary>
        public RelayCommand<RoutedEventArgs> ViewLoadedCommand
        {
            get
            {
                // lazy-instantiate the RelayCommand on first usage
                if (_viewLoadedCommand == null)
                {
                    _viewLoadedCommand = new RelayCommand<RoutedEventArgs>(
                        e => this.OnViewLoadedCommand(e));
                }
                return _viewLoadedCommand;
            }
        }

        #endregion ViewLoadedCommand

        #region View EventHandlers

        protected virtual void OnViewLoadedCommand(RoutedEventArgs e)
        {
            EventHandler handler = ViewLoaded;
            if (handler != null)
            {
                handler(this, e);
            }
        }

        #endregion

    Read the article

  • C# NullReferenceException when passing DataTable

    - by Timothy
    I've been struggling with a NullReferenceException and hope someone here will be able to point me in the right direction. I'm trying to create and populate a DataTable and then show the results in a DataGridView control. The basic code follows, and execution stops with a NullReferenceException at the point where I invoke the new UpdateResults_Delegate. Oddly enough, I can trace entries.Rows.Count successfully before I return it from QueryEventEntries, so I can at least show 1) entries is not a null reference, and 2) the DataTable contains rows of data. I know I have to be doing something wrong, but I just don't know what.

        private delegate void UpdateResults_Delegate(DataTable entries);

        private void UpdateResults(DataTable entries)
        {
            dataGridView.DataSource = entries;
        }

        private void button_Click(object sender, EventArgs e)
        {
            Thread t = new Thread(new ThreadStart(PerformQuery));
            t.Start();
        }

        private void PerformQuery()
        {
            DateTime start = new DateTime(dateTimePicker1.Value.Year, dateTimePicker1.Value.Month, dateTimePicker1.Value.Day, 0, 0, 0);
            DateTime stop = new DateTime(dateTimePicker2.Value.Year, dateTimePicker2.Value.Month, dateTimePicker2.Value.Day, 0, 0, 0);
            DataTable entries = QueryEventEntries(start, stop);
            Invoke(new UpdateResults_Delegate(UpdateResults), entries);
        }

        private DataTable QueryEventEntries(DateTime start, DateTime stop)
        {
            DataTable entries = new DataTable();
            entries.Columns.Add("colEventType", typeof(Int32));
            entries.Columns.Add("colTimestamp", typeof(Int32));
            entries.Columns.Add("colDetails", typeof(String));
            ...
            conn.Open();
            using (SqlDataReader r = cmd.ExecuteReader())
            {
                while (r.Read())
                {
                    entries.Rows.Add(result.GetInt32(0), result.GetInt32(1), result.GetString(2));
                }
            }
            return entries;
        }

    Read the article

  • Ninject InThreadScope Binding

    - by e36M3
    I have a Windows service that contains a file watcher that raises events when a file arrives. When an event is raised I will be using Ninject to create business layer objects that have, inside of them, a reference to an Entity Framework context which is also injected via Ninject.

    In my web applications I always used InRequestScope for the context; that way, within one request all business layer objects work with the same Entity Framework context. In my current Windows service scenario, would it be sufficient to switch the Entity Framework context binding to an InThreadScope binding?

    In theory, when an event handler in the service triggers, it's executed under some thread; then, if another file arrives simultaneously, it will be executing under a different thread. Therefore both events will not be sharing an Entity Framework context, in essence just like two different HTTP requests on the web.

    One thing that bothers me is the destruction of these thread-scoped objects. The Ninject wiki says:

        .InThreadScope() - One instance of the type will be created per thread.
        .InRequestScope() - One instance of the type will be created per web request, and will be destroyed when the request ends.

    Based on this I understand that InRequestScope objects will be destroyed (garbage collected?) when (or at some point after) the request ends. This says nothing, however, about how InThreadScope objects are destroyed. To get back to my example: when the file watcher event handler method is completed, the thread goes away (back to the thread pool?); what happens to the InThreadScope-d objects that were injected?

    EDIT: One thing is clear now: when using InThreadScope() it will not destroy your object when the handler for the file watcher exits. I was able to reproduce this by dropping many files in the folder, and eventually I got the same thread id, which resulted in exactly the same Entity Framework context as before, so it's definitely not sufficient for my applications. In this case a file that came in 5 minutes later could be using a stale context that was assigned to the same thread before.

    Read the article

  • .net Class "is not a member of" Class .. even though it is?

    - by Matt Thrower
    Hi, looking over some older code, I've run into a strange namespace error. Let's say I have two projects, HelperProject and WebProject. The full namespace of each - as given in application properties - is myEmployer.HelperProject and myEmployer.Web.WebProject.

    The pages in the web project are full of statements that use classes from the helper project. There are no imports/using statements, but there is a reference to the helper project added in the bin. A few example lines might be:

        myEmployer.HelperProject.StringHelper.GetFixedLengthText(Text, "", Me.Width, 11)
        myEmployer.HelperProject.Utils.StringHelper.EstimatePixelLength(Text, 11)

    However, every line that is written in this manner throws the error 'HelperProject' is not a member of 'myEmployer'. If you declare the statements like this, everything seems fine:

        HelperProject.StringHelper.GetFixedLengthText(Text, "", Me.Width, 11)
        HelperProject.Utils.StringHelper.EstimatePixelLength(Text, 11)

    In the solution object browser and the bin folder, HelperProject appears with its full namespace, myEmployer.HelperProject. I don't want to have to change all the statements, and besides, I suspect this is masking a more fundamental problem here. But I have no idea what's going on. Can anyone offer any pointers please? Cheers, Matt

    Read the article

  • Making one table equal to another without a delete *

    - by Joshua Atkins
    Hey, I know this is a bit of a strange one, but if anyone has any help that would be greatly appreciated.

    The scenario is that we have a production database at a remote site and a developer database in our local office. Developers make changes directly to the developer db, and as part of the deployment process a C# application runs and produces a series of .sql scripts that we can execute on the remote side (essentially delete *, insert), but we are looking for something a bit more elaborate, as the downtime from the delete * is unacceptable. This is all reference data that controls menu items, functionality etc. of a major website.

    I have a sproc that essentially returns a diff of two tables. My thinking is that I can insert all the expected data into a tmp table, execute the diff, drop anything from the destination table that is not in the source, and then upsert everything else. The question is: is there an easy way to do this without using a cursor?

    To illustrate, the sproc returns a recordset structured like this:

        TableName | Col1 | Col2 | Col3
        Dest      | ...
        Src       | ...

    Anything in the recordset with TableName = Dest should be deleted (as it does not exist in Src), and anything in Src should be upserted into Dest. I cannot think of a way to do this purely set-based, but my DB-fu is weak. Any help would be appreciated. Apologies if the explanation is sketchy; let me know if you need any more details.

    Read the article

  • Matlab and .net problem with character string function input

    - by Peter
    I have a MATLAB function that I've compiled into a .NET library. The function is a simple one that takes a character array as an input and a numeric array as output:

        function insert = money(dateLimit)
        ..
        insert = [1 2];

    The function works fine when no function arguments are specified (a default argument is provided inside the function):

        Dim sf As New SpreadFinder.SpreadFinder
        Dim output = sf.money()

    As soon as an argument is specified, .NET complains. I'm thinking this should be easy and has been done before, but searching through the MATLAB documentation doesn't offer much help.

    Here's what I've tried. The sf.money() overload for the function with arguments is (numArgsOut as Integer, argsOut as MWArray, argsIn as MWArray), and hence that's what I've used. What am I missing?

        Dim sf As New SpreadFinder.SpreadFinder

        Dim inputArgs(1) As Arrays.MWCharArray
        Dim dateLimitString As String = "some string"
        inputArgs(0) = New Arrays.MWCharArray(dateLimitString)

        Dim outputArgs(1) As Arrays.MWNumericArray
        outputArgs(0) = New Arrays.MWNumericArray()

        sf.money(1, outputArgs, inputArgs)

    This gives:

        System.NullReferenceException : Object reference not set to an instance of an object.
        at MathWorks.MATLAB.NET.Utility.MWMCR.EvaluateFunction(String functionName, Int32 numArgsOut, Int32 numArgsIn, MWArray[] argsIn)
        at MathWorks.MATLAB.NET.Utility.MWMCR.EvaluateFunction(String functionName, Int32 numArgsOut, MWArray[]& argsOut, MWArray[] argsIn)
        at SpreadFinder.SpreadFinder.money(Int32 numArgsOut, MWArray[]& argsOut, MWArray[] argsIn)

    Read the article

  • Why ClassCastException on JMS ConnectionFactory lookup in JNDI?

    - by Derek Mahar
    What might be the cause of the following ClassCastException in a standalone JMS client application when it attempts to retrieve a connection factory from the JNDI provider?

        Exception in thread "main" java.lang.ClassCastException: javax.naming.Reference cannot be cast to javax.jms.ConnectionFactory

    Here is an abbreviated version of the JMS client that includes only its start() and stop() methods. The exception occurs on the first line of method start(), which attempts to retrieve the connection factory from the JNDI provider, a remote LDAP server. The JMS connection factory and destination objects are on a remote JMS server.

        class JmsClient {
            private ConnectionFactory connectionFactory;
            private Connection connection;
            private Session session;
            private MessageConsumer consumer;
            private Topic topic;

            public void stop() throws JMSException {
                consumer.close();
                session.close();
                connection.close();
            }

            public void start(Context context, String connectionFactoryName, String topicName)
                    throws NamingException, JMSException {
                // ClassCastException occurs when retrieving connection factory.
                connectionFactory = (ConnectionFactory) context.lookup(connectionFactoryName);
                connection = connectionFactory.createConnection("username", "password");
                session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                topic = (Topic) context.lookup(topicName);
                consumer = session.createConsumer(topic);
                connection.start();
            }

            private static Context getInitialContext() throws NamingException, IOException {
                String filename = "context.properties";
                Properties props = new Properties();
                props.load(new FileInputStream(filename));
                return new InitialContext(props);
            }
        }
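
    One common cause, offered as a hedged guess: the LDAP entry holds a javax.naming.Reference, and JNDI can only turn that Reference into a real ConnectionFactory if the object-factory class it names (shipped in the JMS provider's client JARs) is on the standalone client's classpath; otherwise lookup() hands back the raw Reference and the cast fails. A small diagnostic sketch, meant to sit inside the JmsClient class above, that reports what the lookup actually returned:

        // Drop-in diagnostic for start(): look the name up and report what JNDI
        // returned before attempting the cast.
        private static ConnectionFactory lookupFactory(Context context, String name)
                throws NamingException {
            Object raw = context.lookup(name);
            if (raw instanceof javax.naming.Reference) {
                javax.naming.Reference ref = (javax.naming.Reference) raw;
                // This factory class (and its JAR) must be visible to the client
                // for JNDI to reconstruct the actual ConnectionFactory object.
                System.out.println("Got a Reference; needs object factory "
                        + ref.getFactoryClassName()
                        + " (location: " + ref.getFactoryClassLocation() + ")");
            }
            return (ConnectionFactory) raw; // still throws if the provider JARs are missing
        }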

    Read the article

  • Types issue in F#

    - by Andry
    Hello! In my ongoing adventure of deep-diving into F#, I am understanding a lot of this powerful language, but there are things that I still do not understand so clearly. One of the most important issues I need to master is types.

    The book I am reading is very straightforward and introduces entities and main functionality with a direct approach. The first thing I could get started with is types. It introduces the main types such as list, option, tuple, and so on... It is clearly underlined that all these types are IMMUTABLE, for many reasons regarding functional programming and data consistency in functional programming. Well, no problems until now...

    But now I am getting started with concrete types... Well... I have problems managing types like list, option, tuples, types created through the new operator, and concrete types created using the type keyword (abbreviations, concrete types...).

    So my question is: how can I efficiently catalogue/distinguish all the kinds of types in F#? I can create a perfect separation among types in C# and VB.NET. For example, in VB.NET there are value and reference types, while in C# there are only references and int, double, etc. are also treated as objects (they are objects, while in VB.NET a value type is not an object, and there is a split in types for this reason). In F# I cannot draw such distinctions among types in the language. Can you help me? I hope I was clear.

    Read the article

  • What should every developer know about databases?

    - by Aaronaught
    Whether we like it or not, many if not most of us developers either regularly work with databases or may have to work with one someday. And considering the amount of misuse and abuse in the wild, and the volume of database-related questions that come up every day, it's fair to say that there are certain concepts that developers should know - even if they don't design or work with databases today. So: what are the important concepts that developers and other software professionals ought to know about databases?

    Guidelines for responses:

    - Keep your list short. One concept per answer is best.
    - Be specific. "Data modelling" may be an important skill, but what does that mean precisely?
    - Explain your rationale. Why is your concept important? Don't just say "use indexes." Don't fall into "best practices." Convince your audience to go learn more.
    - Upvote answers you agree with. Read other people's answers first. One high-ranked answer is a more effective statement than two low-ranked ones. If you have more to add, either add a comment or reference the original.
    - Don't downvote something just because it doesn't apply to you personally. We all work in different domains.

    The objective here is to provide direction for database novices to gain a well-founded, well-rounded understanding of database design and database-driven development, not to compete for the title of most-important.

    Read the article

  • JAAS and WebLogic 10.3: Granting specific codebase permissions to a JAR bundled within an EAR

    - by Jason
    Here's my scenario: I have a JAR within the APP-INF/lib of my EAR, to be deployed within WebLogic 10g Release 3, against which I wish to grant specific permissions, e.g.:

        grant codebase "file:/c:/somedir/my.jar" {
            permission java.net.SocketPermission "*:-", "accept,connect,listen, resolve";
            permission java.net.SocketPermission "localhost:-", "accept,connect,listen,resolve";
            permission java.net.SocketPermission "127.0.0.1:-", "accept,connect,listen,resolve";
            permission java.net.SocketPermission "230.0.0.1:-", "accept,connect,listen,resolve";
            permission java.util.PropertyPermission "*", "read,write";
            permission java.lang.RuntimePermission "*";
            permission java.io.FilePermission "<<ALL FILES>>", "read,write,delete";
            permission javax.security.auth.AuthPermission "*";
            permission java.security.SecurityPermission "*";
        };

    Questions:

    1. Where is the best place to define this grant - in the java.policy of the JRE, the WL server's weblogic.policy, or within an XML packaged within the EAR?
    2. How do I define the codebase URL to the JAR? The examples I have seen have an explicit reference to the JAR on the file system, however I am deploying the JAR packaged up within an EAR.

    Thanks!
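
    While experimenting with where the grant belongs, a small probe run from code inside the packaged JAR can confirm whether a given permission actually reached that codebase. This is a hedged helper, not WebLogic-specific guidance; the permission checked below is just one of those in the grant above:

        import java.security.AccessControlException;
        import java.security.AccessController;
        import java.util.PropertyPermission;

        public final class GrantProbe {
            // Call this from a class that lives inside APP-INF/lib/my.jar; it reports
            // whether the effective policy granted this permission to that codebase.
            public static boolean hasPropertyPermission() {
                try {
                    AccessController.checkPermission(new PropertyPermission("*", "read,write"));
                    return true;
                } catch (AccessControlException denied) {
                    return false;
                }
            }
        }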

    Read the article

  • IValidator.Validate method and adding error message to a custom type

    - by user102533
    I have several server controls that implement the IValidator interface. As such, they have their own Validate() methods that look like this:

        public void Validate()
        {
            this.IsValid = true;
            if (someConditionFails())
            {
                ErrorMessage = "Condition failed!";
                this.IsValid = false;
            }
        }

    I understand that these Validate() methods are executed on postback before the load-completed event, which is executed before the save button's event handler. What I would like to do is pass in a reference to an instance of a custom class that collects all the error messages, which I can then access from the Save button event handler. In other words, I would like to do something like this:

        public void Validate(ref SummaryOfErrorMessages sum)

    I guess I can't do this, as the signature is different from what the IValidator interface has. The other option I can think of is, on the LoadCompleted event, to iterate through all the validators on the page, get the ones with IsValid = false and create my SummaryOfErrorMessages there. Does this sound right? Is there a better way of doing it?

    Read the article

  • Windows Azure ASP.NET MVC Role behaves strangely when redirecting from HTTP to HTTPS

    - by Rinat Abdullin
    Subj. I've got an ASP.NET 2 MVC Worker Role application that does not differ much from the default template. When attempting a redirect from HTTP to HTTPS (this happens when we access a controller secured by the usual RequireSSL attribute implementation) we get a blank page with a "Bad Request" message.

    IntelliTrace shows this:

        Thrown: "The file '/Views/Home/LogOnUserControl.aspx' does not exist." (System.Web.HttpException)

    The call stack is really short:

        [External Code]
        App_Web_vfahw7gz.dll!ASP.views_shared_site_master.__Render__control1(System.Web.UI.HtmlTextWriter __w = {unknown}, System.Web.UI.Control parameterContainer = {unknown})
        [External Code]
        App_Web_bsbqxr44.dll!ASP.views_home_index_aspx.ProcessRequest(System.Web.HttpContext context = {unknown})
        [External Code]

    The user control reference is the usual one in /Views/Shared/Site.Master:

        <div id="logindisplay">
            <% Html.RenderPartial("LogOnUserControl"); %>
        </div>

    And the partial view LogOnUserControl.ashx is located in Views/Shared (and it is ASHX, not ASPX). The problem shows up when we try to access site pages that require auth and redirect. These pages are secured by the RequireSSL attribute (Redirect == true):

        [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = false)]
        public sealed class RequireSslAttribute : FilterAttribute, IAuthorizationFilter
        {
            public bool Redirect { get; set; }

            // Methods
            public void OnAuthorization(AuthorizationContext filterContext)
            {
                // this get's messy, when we are running custom ports
                // within the local dev fabric.
                // hence we disable code in the debug
        #if !DEBUG
                if (filterContext == null)
                {
                    throw new ArgumentNullException("filterContext");
                }
                if (filterContext.HttpContext.Request.IsSecureConnection)
                    return;

                var canRedirect = string.Equals(filterContext.HttpContext.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase);
                if (canRedirect && Redirect)
                {
                    var builder = new UriBuilder
                    {
                        Scheme = "https",
                        Host = filterContext.HttpContext.Request.Url.Host,
                        Path = filterContext.HttpContext.Request.RawUrl
                    };
                    filterContext.Result = new RedirectResult(builder.ToString());
                }
                else
                {
                    throw new HttpException(0x193, "Access forbidden. The requested resource requires an SSL connection.");
                }
        #endif
            }
        }

    Obviously we compile in RELEASE for this case. Does anybody have any idea what could cause this strange exception and how to get rid of it?

    Read the article

  • loading child swf as3

    - by RichW
    Hi, I've been given an FLA to make some changes to. Basically it's a fairly long timeline animation with sound. So far I've successfully added a few button functions for sound etc., but one has got me stumped. One of the buttons needs to load a child SWF. I'm using the code below, but I'm receiving an error - 'Error #1009: Cannot access a property or method of a null object reference'. I believe this may be referring to an object that isn't set yet, but I have no idea which one it is. Code:

        var mcExt:MovieClip = new MovieClip();
        var ldr:Loader = new Loader();
        ldr.contentLoaderInfo.addEventListener(Event.COMPLETE, swfLoaded);
        ldr.load(new URLRequest("Downloads.swf"));

        function swfLoaded(e:Event):void
        {
            mcExt = MovieClip(ldr.contentLoaderInfo.content);
            ldr.contentLoaderInfo.removeEventListener(Event.COMPLETE, swfLoaded);
            mcExt.x = 50;
            mcExt.y = 50;
            addChild(mcExt);
        }

    Any help on what is going wrong would be greatly appreciated! Thanks

    Read the article

  • Take most significant 8 bytes of the MD5 hash of a string as a long (in Ruby)

    - by Nate Murray
    Hey friends, I'm trying to implement a Java "hash" function in Ruby. Here's the Java side:

        import java.nio.charset.Charset;
        import java.security.MessageDigest;

        /**
         * @return most significant 8 bytes of the MD5 hash of the string, as a long
         */
        protected long hash(String value) {
            byte[] md5hash;
            md5hash = md5Digest.digest(value.getBytes(Charset.forName("UTF8")));
            long hash = 0L;
            for (int i = 0; i < 8; i++) {
                hash = hash << 8 | md5hash[i] & 0x00000000000000FFL;
            }
            return hash;
        }

    So far, my best guess in Ruby is:

        # WRONG - doesn't work properly.
        #!/usr/bin/env ruby -wKU
        require 'digest/md5'
        require 'pp'

        md5hash = Digest::MD5.hexdigest("0").unpack("U*")
        pp md5hash

        hash = 0
        0.upto(7) do |i|
          hash = hash << 8 | md5hash[i] & 0x00000000000000FF
        end
        pp hash

    Problem is, this Ruby code doesn't match the Java output. For reference, the above Java code, given these strings, returns the corresponding longs:

        "00038c53790ecedfeb2f83102e9115a522475d73" => -2059313900129568948
        "0"                                        => -3473083983811222033
        "001211e8befc8ac22dd265ecaa77f8c227d0007f" => 3234260774580957018

    Thoughts:

    - I'm having problems getting the UTF8 bytes from the Ruby string.
    - In Ruby I'm using hexdigest; I suspect I should be using just digest instead.
    - The Java code is taking the MD5 of the UTF8 bytes, whereas my Ruby code is taking the bytes of the MD5 (as hex).

    Any suggestions on how to get the exact same output in Ruby?
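
    One point that is easy to verify: Digest::MD5.hexdigest returns the 32-character hex string, so unpack("U*") yields the character codes of hex digits rather than the 16 raw digest bytes that the Java loop consumes; the second thought above (use digest, then work with its bytes) is the right track. For cross-checking any port, a self-contained copy of the Java side (same logic as the snippet above, with an explicit UTF-8 charset) can print the reference values quoted in the question:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public class HashCheck {
            // Most significant 8 bytes of MD5(value), folded into a signed long.
            static long hash(String value) throws Exception {
                byte[] md5 = MessageDigest.getInstance("MD5")
                        .digest(value.getBytes(StandardCharsets.UTF_8));
                long h = 0L;
                for (int i = 0; i < 8; i++) {
                    h = h << 8 | (md5[i] & 0xFFL);
                }
                return h;
            }

            public static void main(String[] args) throws Exception {
                // Expected per the question: -3473083983811222033
                System.out.println(hash("0"));
            }
        }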

    Read the article

  • What data structure to use / data persistence

    - by Dave
    I have an app where I need one table of information with the following fields:

        field 1 - int or char
        field 2 - string (max 10 char)
        field 3 - string (max 20 char)
        field 4 - float

    I need the program to filter on field 1 based upon a segmented control, and select a field 2 from a picker. From this data I need to look up field 4 to use in a calculation. Total records will be about 200; I never see it going above 400 - 500.

    I am going to use a singleton, which I am able to do; I just need help with the structure for this, with data persistence. What type of data structure should I use, and should I use NSNumber, NSString, etc. or old data types like float, char, etc.? I thought about a struct put into an array, but there is probably a better way. This is new to me, so any help or reference to examples would be great.

    I also thought about a plist or dictionary, but it looks like that is just a lookup and a field, which obviously won't work. Core Data looked like overkill to me. Also, with any recommendation, how should I get initial data into it? I want the user to be able to edit and add to the database. Sorry for the old terms, you can see what generation I am from... Thanks in advance!!!!

    Read the article

  • Modifying UINavigation Bar Buttons in SubViews

    - by james
    I'm having trouble trying to modify the navigation bar in the subview portion of my application:

        self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc]
            initWithBarButtonSystemItem:UIBarButtonSystemItemDone
            target:self
            action:@selector(add_Clicked:)] autorelease];

    I have no issues modifying the navigation bar in any of my UIViewController classes. The simplified application class outline is as such:

        AppDelegate
        - UIViewControllerA (has a left and a right navigationBarButton)
          - Subview is displayed when a SegmentControl is selected. (Within the subview, I'm trying to modify the right NavigationBarButton that is displayed.)

        [self.view addSubview:newControllerName.view];

    Methods I have attempted:

    1. Trying to set self.navigationItem.rightBarButtonItem within my subview to a new UIBarButtonItem.
    2. Creating a pointer to UIViewControllerA within my AppDelegate. UIViewControllerA contains a function setNavButton I wrote to set the rightBarButtonItem to a button. I then reference the AppDelegate's pointer to UIViewControllerA and attempt to call setNavButton. I included an NSLog call to see if that function is being called, and it is executing, but the navigation bar isn't being modified.

    I'm trying to avoid having to push a UIViewController after the SegmentControl is clicked in UIViewControllerA, so that I can treat the SegmentControl like tabs. I'm not getting any errors at compile or run time. Anyone have any ideas?

    Read the article

  • Design pattern to use instead of multiple inheritance

    - by mizipzor
    Coming from a C++ background, I'm used to multiple inheritance. I like the feeling of a shotgun squarely aimed at my foot. Nowadays I work more in C# and Java, where you can only inherit one base class but implement any number of interfaces (did I get the terminology right?).

    For example, let's consider two classes that implement a common interface but different (yet required) base classes:

        public class TypeA : CustomButtonUserControl, IMagician
        {
            public void DoMagic()
            {
                // ...
            }
        }

        public class TypeB : CustomTextUserControl, IMagician
        {
            public void DoMagic()
            {
                // ...
            }
        }

    Both classes are UserControls, so I can't substitute the base class. Both need to implement the DoMagic function. My problem now is that both implementations of the function are identical, and I hate copy-and-paste code.

    The (possible) solutions:

    - I naturally want TypeA and TypeB to share a common base class, where I can write that identical function definition just once. However, due to the limit of just one base class, I can't find a place along the hierarchy where it fits.
    - One could also try to implement a sort of composite pattern: putting the DoMagic function in a separate helper class. But the function here needs (and modifies) quite a lot of internal variables/fields, and sending them all as (reference) parameters would just look bad.
    - My gut tells me that the adapter pattern could have a place here, some class to convert between the two when necessary. But it also feels hacky.

    I tagged this language-agnostic since it applies to all languages that use this one-baseclass-many-interfaces approach. Also, please point out if I seem to have misunderstood any of the patterns I named. In C++ I would just make a class with the private fields, that function implementation, and put it in the inheritance list. What's the proper approach in C#/Java and the like?
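
    One composition-based route, sketched here in Java with made-up member names (the C# translation is mechanical): pull the shared DoMagic body into a single collaborator that talks to its host through a small interface, so the helper never needs the hosts' private fields handed over one by one.

        // Stand-in stubs for the base classes and interface from the question.
        class CustomButtonUserControl { }
        class CustomTextUserControl { }
        interface IMagician { void doMagic(); }

        // The shared state DoMagic needs, expressed as an interface (names invented).
        interface MagicHost {
            int getWidth();
            void setStatus(String status);
        }

        // The single copy of the shared behaviour.
        class MagicHelper {
            void doMagic(MagicHost host) {
                host.setStatus("magic over " + host.getWidth() + "px");
            }
        }

        class TypeA extends CustomButtonUserControl implements IMagician, MagicHost {
            private final MagicHelper helper = new MagicHelper();
            private String status;

            @Override public void doMagic()           { helper.doMagic(this); }
            @Override public int getWidth()           { return 120; }
            @Override public void setStatus(String s) { this.status = s; }
        }

        class TypeB extends CustomTextUserControl implements IMagician, MagicHost {
            private final MagicHelper helper = new MagicHelper();
            private String status;

            @Override public void doMagic()           { helper.doMagic(this); }
            @Override public int getWidth()           { return 300; }
            @Override public void setStatus(String s) { this.status = s; }
        }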

    Read the article

  • How to get the ui:param value in Javabean

    - by mihaela
    Hello, I am learning Facelets and Seam and I'm facing the following problem: I have 2 xhtml files, one includes the other, and each one has its own Seam component as backing bean. I want to send an object to the included facelet and obtain that object in the backing bean corresponding to the included facelet.

    I'll take an example to explain the situation better: registration.xhtml has the Seam component Registration.java as backing bean; in this class I have an object of type Person. address.xhtml has the Seam component Address.java as backing bean; in this class I want to obtain the Person object from the Registration component and set the address.

    registration.xhtml includes address.xhtml and passes the object using ui:param. How do I obtain this object in the Address bean? Will it be the same reference as the object in the Registration bean? Is ui:param the right way of passing this object, or is there another solution (maybe f:attribute, but even in that case, how do I obtain the object in the bean)?

    This example is simple and not necessarily realistic, but I have a similar problem and I don't know how to solve it. Thanks in advance.
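
    A hedged sketch of one thing to try (not Seam-specific, and the parameter name "person" is assumed): evaluate the EL expression that the ui:param defines from inside the bean. Whether the alias is visible outside the included facelet depends on when and where this runs, so treat it as an experiment rather than a guaranteed answer; if it does resolve, EL returns the same object reference rather than a copy, so setting the address on it is visible to the Registration component too.

        import javax.faces.context.FacesContext;

        public class Address {

            // Person is the class held by Registration.java in the question.
            // Resolves the object exposed to the included facelet as
            // <ui:param name="person" value="#{registration.person}"/> (names assumed).
            public Person resolvePerson() {
                FacesContext ctx = FacesContext.getCurrentInstance();
                return ctx.getApplication()
                          .evaluateExpressionGet(ctx, "#{person}", Person.class);
            }
        }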

    Read the article
