Search Results

  • Why should I use an n-tier approach when using a SqlDataSource is a lot easier?

    - by The_AlienCoder
    When it comes to web development I have always tried to work smart, not hard, so for a long time my approach to interacting with databases in my ASP.NET projects has been this: 1) create my stored procedures; 2) drag a SqlDataSource control onto my .aspx page; 3) bind a DataList control to the SqlDataSource; 4) insert, update and delete using my DataList, or programmatically using the built-in SqlDataSource methods, e.g.

        MySqlDataSource.InsertParameters["author"].DefaultValue = TextBox1.Text;
        MySqlDataSource.Insert();

    Recently, however, I got a relatively easy web project, so I decided to employ a three-tier model. But I got exhausted halfway through and it just didn't seem worth it! It felt like I was working too hard on a project that could easily have been finished with a couple of SqlDataSource controls. So why is the n-tier model better than my approach? Does it have anything to do with performance? What are the advantages of the ObjectDataSource control over the SqlDataSource control?
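
    For contrast, a minimal sketch of what step 4 might look like in a three-tier layout. This is only an illustration of the pattern, not code from the question: the Author entity, AuthorRepository class and usp_InsertAuthor procedure are hypothetical names.

        using System.Data;
        using System.Data.SqlClient;

        // Business/data layer: a plain entity plus a repository that owns all SQL.
        public class Author
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class AuthorRepository
        {
            private readonly string _connectionString;

            public AuthorRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            public void Insert(Author author)
            {
                using (var connection = new SqlConnection(_connectionString))
                using (var command = new SqlCommand("usp_InsertAuthor", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@name", author.Name);
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }

        // Presentation layer (code-behind) no longer knows any SQL:
        // new AuthorRepository(connectionString).Insert(new Author { Name = TextBox1.Text });

    The usual argument for the extra work is not raw performance but that the middle tier can be reused and unit-tested outside a page, which a SqlDataSource wired into an .aspx cannot.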

  • Why is TreeSet<T> an internal type in .NET?

    - by Justin Niessner
    So, I was just digging around in Reflector trying to find the implementation details of HashSet<T> (out of sheer curiosity, based on the answer to another question here) and noticed the following:

        internal class TreeSet<T> : ICollection<T>, IEnumerable<T>, ICollection,
            IEnumerable, ISerializable, IDeserializationCallback

    Without looking too deep into the details, it looks like a self-balancing binary search tree. My question is: does anybody have insight as to why this class is internal? Is it simply that the other collection types use it internally, hiding the complexities of a BST from the general masses, or am I way off base?
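
    For what it's worth, .NET 4 exposes a public tree-backed set as SortedSet<T>, so if the goal is an ordered, duplicate-free collection there is no need to reach for the internal type. A small sketch, assuming .NET 4 is available:

        using System;
        using System.Collections.Generic;

        class SortedSetDemo
        {
            static void Main()
            {
                // SortedSet<T> keeps elements unique and in sorted order,
                // backed by a self-balancing binary search tree.
                var set = new SortedSet<int> { 5, 1, 3, 1 };
                Console.WriteLine(string.Join(", ", set)); // prints: 1, 3, 5
            }
        }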

  • Why do we have to pass a const "type" reference to the constructor instead of just the type by value?

    - by hamza
    I'm trying to make a simple program (and yes, it is homework) that can generate dates, and like most normal people I made my class attributes private. I tried to pass the same type I'm working on to the constructor, but the compiler would not accept it. I did some research and found out that in cases like this people generally pass a const "type" reference to the constructor, which showed me that I haven't understood OOP well. So why do we have to pass a const "type" reference to the constructor instead of just the type by value? Please give me some links or websites for beginners. PS: sorry for my English.

  • Why can't I store XML in an ASP.NET ListBox value?

    - by mcass20
    Why does this work:

        ListItem item = new ListItem();
        string value = lstAvailExtPropsToFilter.SelectedItem.Text + " = " + txtExtPropToFilter.Text;
        string text = lstAvailExtPropsToFilter.SelectedItem.Text + " = " + txtExtPropToFilter.Text;
        item.Text = text;
        item.Value = value;
        lstExtPropsToFilter.Items.Add(item);

    but not this:

        ListItem item = new ListItem();
        string value = string.Format("<key>{0}</key><value>{1}</value>",
            lstAvailExtPropsToFilter.SelectedItem.Text, txtExtPropToFilter.Text);
        string text = lstAvailExtPropsToFilter.SelectedItem.Text + " = " + txtExtPropToFilter.Text;
        item.Text = text;
        item.Value = value;
        lstExtPropsToFilter.Items.Add(item);
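
    If it is the angle brackets that break the round trip (an assumption; the exact error isn't quoted above), one common workaround is to HTML-encode the fragment going into the value and decode it when reading it back:

        using System.Web;

        // Store the XML fragment encoded, so the item value holds plain text.
        string rawValue = string.Format("<key>{0}</key><value>{1}</value>",
            lstAvailExtPropsToFilter.SelectedItem.Text, txtExtPropToFilter.Text);
        item.Value = HttpUtility.HtmlEncode(rawValue);

        // Later, recover the original markup from the selected item.
        string xmlFragment = HttpUtility.HtmlDecode(lstExtPropsToFilter.SelectedValue);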

  • Delphi 7 - Why does Windows 7 change the encoding of characters at runtime?

    - by LukLed
    I have a Delphi 7 form [screenshot] and my code [screenshot]; when I run this form on Windows 7, I see [screenshot]. At design time the form had Polish letters in the first label, but it doesn't have them at runtime. It looks fine on Vista and Windows XP. When I set the caption of the second label in code, everything works and the characters are properly encoded.

    First 5 character codes of the top label on Windows 7: 65 97 69 101 83
    First 5 character codes of the top label on Windows Vista/XP: 165 185 202 234 140
    First 5 character codes of the bottom label on every system: 165 185 202 234 140

    Windows 7 changes the encoding. Why? My system settings seem to be OK, and I have the proper language set for non-Unicode applications in Control Panel.

  • Why can't I expose an interface in a .NET asmx web service?

    - by mcliedtk
    I have a .NET web service (using asmx; have not upgraded to WCF yet) that exposes the following:

        public class WidgetVersion1 : IWidget {}
        public class WidgetVersion2 : IWidget {}

    When I attempt to bind to the web service, I get the following serialization error:

        Cannot serialize member WidgetVersion1 of type IWidget because it is an interface.

    I have tried adding various attributes to the IWidget interface (XmlIgnore, SoapIgnore, NonSerialized), but they are not valid on an interface. Does anyone know why I am unable to expose the interface? I assume WSDL does not support interfaces, but couldn't .NET get around this by simply not serializing the interface? Is there any way around this, apart from removing the IWidget interface from the WidgetVersion1 and WidgetVersion2 class definitions?
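
    One workaround that keeps a shared contract, sketched under the assumption that the widget types can share a base class: XmlSerializer will not serialize interface-typed members, but it will serialize an abstract base class whose concrete descendants are announced with [XmlInclude]:

        using System.Xml.Serialization;

        // The [XmlInclude] attributes tell XmlSerializer which concrete
        // types can appear wherever WidgetBase is declared.
        [XmlInclude(typeof(WidgetVersion1))]
        [XmlInclude(typeof(WidgetVersion2))]
        public abstract class WidgetBase
        {
            // shared members would go here (hypothetical)
        }

        public class WidgetVersion1 : WidgetBase { }
        public class WidgetVersion2 : WidgetBase { }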

  • Why do I get a segfault at the end of the application after everything has run properly?

    - by VaioIsBorn
        #include <string.h>
        #include <stdlib.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned char *stole;
            unsigned char pass[] = "m4ak47";
            printf("Vnesi password: \t");
            scanf("%s", stole);
            if (strncmp(stole, pass, sizeof(pass)) != 0) {
                printf("wrong password!\n");
                exit(0);
            }
            else
                printf("Password correct\n");
            printf("some stuf here...\n\n");
            return 0;
        }

    This program works nicely, but with one problem: if the password is correct, it does print "some stuf here...", but it also shows a segmentation fault error at the end. Why?

  • Why do derivatives trading positions always require C++ knowledge?

    - by Jeffrey
    I've never worked in a trading environment before, and I was curious to see that a few of the trading houses seem to use C# but most of them rely heavily on C++. Why is that? Is it because C++ is better performance-wise? Is it because of legacy code bases? Is it because of cross-platform issues? What about dynamic languages (Ruby, Python)? Are they too slow for this kind of work in terms of performance? Update: if reliability and performance are important, would Erlang be the "next big thing" in trading platforms?

  • Why is the output of the following code different each time I compile or run it?

    - by Sanjeev
        class Name implements Runnable {
            public void run() {
                for (int x = 1; x <= 3; x++) {
                    System.out.println("Run by " + Thread.currentThread().getName() + ", x is " + x);
                }
            }
        }

        public class Threadtest {
            public static void main(String[] args) {
                // Make one Runnable
                Name nr = new Name();
                Thread one = new Thread(nr);
                Thread two = new Thread(nr);
                Thread three = new Thread(nr);
                one.setName("A");
                two.setName("B");
                three.setName("C");
                one.start();
                two.start();
                three.start();
            }
        }

    The output is different each time I compile and run it, and I don't know why. Any ideas?

  • Why can't I inject a null value with Ninject's ConstructorArgument?

    - by stiank81
    When using Ninject's ConstructorArgument you can specify the exact value to inject into a specific parameter. Why can't this value be null, and how can I make it work? Maybe it's not something you'd normally want to do, but I want to use it in my unit tests. Example:

        public class Ninja
        {
            private readonly IWeapon _weapon;

            public Ninja(IWeapon weapon)
            {
                _weapon = weapon;
            }
        }

        public void SomeFunction()
        {
            var kernel = new StandardKernel();
            var ninja = kernel.Get<Ninja>(new ConstructorArgument("weapon", null));
        }
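
    A sketch of one possible way around it, assuming the Ninject version in use supports the AllowNullInjection setting (present in later releases; treat both the setting and the cast as things to verify against your version):

        using Ninject;
        using Ninject.Parameters;

        // Assumption: by default Ninject rejects null as an injected value;
        // AllowNullInjection relaxes that check kernel-wide.
        var settings = new NinjectSettings { AllowNullInjection = true };
        var kernel = new StandardKernel(settings);

        // The cast gives the null a static type matching the parameter.
        var ninja = kernel.Get<Ninja>(
            new ConstructorArgument("weapon", (IWeapon)null));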

  • Why does MSTest's Assert.AreEqual(1.0, double.NaN, 0.0) pass?

    - by Egil Hansen
    Short question: why does Assert.AreEqual(1.0, double.NaN, 0.0) pass when Assert.AreEqual(1.0, double.NaN) does not? Is it an error in MSTest, or am I missing something here? Update: I should probably add that the reason behind my question is that I have a bunch of unit tests that unfortunately passed because the result of some linear-algebra matrix operation was NaN or (+/-)Infinity. The unit tests are fine, but since Assert.AreEqual on doubles with a delta passes when actual and/or expected are NaN or Infinity, I was led to believe that the code I was testing was correct.
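
    The likely mechanics, assuming the delta overload fails only when the difference exceeds the delta: every ordered comparison involving NaN is false under IEEE 754, so a check of the form "fail if |expected - actual| > delta" never fires. A small demonstration:

        using System;

        class NaNDeltaDemo
        {
            static void Main()
            {
                double expected = 1.0;
                double actual = double.NaN;

                // Math.Abs(1.0 - NaN) is NaN, and NaN > 0.0 is false,
                // so a "difference > delta" failure branch is never taken.
                Console.WriteLine(Math.Abs(expected - actual) > 0.0); // False

                // The two-argument overload compares for equality instead,
                // and 1.0 == NaN is also false, hence that assert fails.
                Console.WriteLine(expected == actual);                // False
            }
        }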

  • Why is await not taken into consideration after deployment?

    - by Cristian Boariu
    I have a method which makes some synchronous calls to a specific REST API, something like:

        WSRequestHolder url = WS.url("rest_api_url");
        Promise<WS.Response> promisePerPage = url.get();
        promisePerPage.getWrappedPromise().await(3000, TimeUnit.MILLISECONDS);
        WS.Response responsePerPage = promisePerPage.get();
        ProductsWrapper productsWrapper =
            new Gson().fromJson(responsePerPage.getBody(), ProductsWrapper.class);

    As you can see, I allow 3 seconds between calls so each request can be parsed in time and inserted into the DB. It all works great locally, but after I deploy to the cloud everything runs continuously, without the 3-second wait between requests. Do you know why?

  • Why is this std::bind not converted to std::function?

    - by dauphic
    Why is the nested std::bind in the code below not implicitly converted to a std::function<void()> by any of the major compilers (VS2010/2012, gcc, clang)? Is this standard behavior, or a bug?

        #include <functional>
        #include <iostream>  // for std::cin; missing from the original listing

        void bar(int, std::function<void()>) { }
        void foo() { }

        int main()
        {
            std::function<void(int, std::function<void()>)> func;
            func = std::bind(bar, 5, std::bind(foo));
            std::cin.get();
            return 0;
        }

  • How do I figure out why an ssh session sometimes does not exit?

    - by WilliamKF
    I have a C++ application that uses ssh to summon a connection to the server. I find that sometimes the ssh session is left lying around long after the command that summoned the server has exited. Looking at the CentOS 4 man page for ssh, I see the following:

        The session terminates when the command or shell on the remote machine
        exits and all X11 and TCP/IP connections have been closed. The exit
        status of the remote program is returned as the exit status of ssh.

    I see that the command has exited, so I imagine not all of the X11 and TCP/IP connections have been closed. How can I figure out which of these ssh is waiting for, so that I can fix my application to clean up whatever is being left behind that keeps the ssh open? I wonder why this failure only occurs some of the time and not on every invocation; it seems to occur approximately 50% of the time. What could my C++ application be leaving around to trigger this?

  • Why is doing a TOP (1) on an indexed column in MSSQL slow?

    - by reinier
    I'm puzzled by the following. I have a DB with around 10 million rows, and (among other indices) one column has an index. Now I have 700k rows where the campaignid is indeed 3835, and for all these rows the connectionid is the same. I just want to find out this connectionid.

        use messaging_db;

        SELECT TOP (1) connectionid
        FROM outgoing_messages WITH (NOLOCK)
        WHERE (campaignid_int = 3835)

    This query takes approximately 30 seconds to run! I (with my small DB knowledge) would expect it to take any one of the rows and return its connectionid. If I try the same query for a campaign which has only 1 entry, it runs really fast, so the index works. How would I tackle this, and why does it not work?

  • Python: why does str() on some text from a UTF-8 file give a UnicodeEncodeError?

    - by AP257
    I'm processing a UTF-8 file in Python and have used simplejson to load it into a dictionary. However, I'm getting a UnicodeEncodeError when I try to turn one of the dictionary values into a string:

        f = open('my_json.json', 'r')
        master_dictionary = json.load(f)
        # some json wrangling, then it fails on this line...
        mysql_string += " ('" + str(v_dict['code'])

        Traceback (most recent call last):
          File "my_file.py", line 25, in <module>
            str(v_dict['code']) + "'), "
        UnicodeEncodeError: 'ascii' codec can't encode character u'\xf4'
        in position 35: ordinal not in range(128)

    Why is Python even using ASCII? I thought it used UTF-8 by default, and this is a UTF-8 file. What is the problem?

  • Why are the default UI controls in my iPhone app blurred?

    - by Tom H
    Why would the default iPhone interface elements, specifically an unmodified UISwitch and a UISegmentedControl, appear slightly blurred? I have not changed them or called any private APIs. The blurring occurs both when I run in the simulator and when I load the app onto my iPod Touch, so I don't think it's a one-off drawing glitch. The elements were created in code (initWithFrame:), not in Interface Builder. Here is a screenshot of the blurring in the simulator: http://drp.ly/14rS6a. It looks similar on the actual device. Thanks for your help.

  • Why is an anonymous inner class containing nothing generated from this code?

    - by Andrew Westberg
    When run through javac on the command line (Sun JVM 1.6.0_20), this code produces 6 .class files:

        OuterClass.class
        OuterClass$1.class
        OuterClass$InnerClass.class
        OuterClass$InnerClass2.class
        OuterClass$InnerClass$InnerInnerClass.class
        OuterClass$PrivateInnerClass.class

    When run through JDT in Eclipse, it produces only 5 classes:

        OuterClass.class
        OuterClass$InnerClass.class
        OuterClass$InnerClass2.class
        OuterClass$InnerClass$InnerInnerClass.class
        OuterClass$PrivateInnerClass.class

    When decompiled, OuterClass$1.class contains nothing. Where is this extra class coming from, and why is it created?

        package com.test;

        public class OuterClass {
            public class InnerClass {
                public class InnerInnerClass {
                }
            }

            public class InnerClass2 {
            }

            // this class should not exist in OuterClass after dummifying
            private class PrivateInnerClass {
                private String getString() {
                    return "hello PrivateInnerClass";
                }
            }

            public String getStringFromPrivateInner() {
                return new PrivateInnerClass().getString();
            }
        }

  • Why does Google append while(1); in front of their JSON responses?

    - by Andrew Koester
    This is something I've always been curious about: exactly why does Google append while(1); in front of their (private) JSON responses? For example, here's a response to turning a calendar on and off in Google Calendar:

        while(1);[['u',[['smsSentFlag','false'],['hideInvitations','false'],
        ['remindOnRespondedEventsOnly','true'],
        ['hideInvitations_remindOnRespondedEventsOnly','false_true'],
        ['Calendar ID stripped for privacy','false'],['smsVerifiedFlag','true']]]]

    I would assume this is to prevent people from doing an eval() on it, but all you'd really have to do is strip the while and then you'd be set. I would assume the eval prevention is to make sure people write safe JSON-parsing code. I've seen this used in a couple of other places too, but a lot more so with Google (Mail, Calendar, Contacts, etc.). Strangely enough, Google Docs starts with &&&START&&& instead, and Google Contacts seems to start with while(1); &&&START&&&. Does anyone know what's going on here?

  • If free() knows the length of my array, why can't I ask for it in my own code?

    - by Chris Cooper
    I know that it's a common convention to pass the length of a dynamically allocated array to the functions that manipulate it:

        #include <stdio.h>   /* these two includes were missing from the listing */
        #include <stdlib.h>

        void initializeAndFree(int* anArray, int length);

        int main() {
            int arrayLength = 0;
            scanf("%d", &arrayLength);
            int* myArray = (int*)malloc(sizeof(int) * arrayLength);
            initializeAndFree(myArray, arrayLength);
        }

        void initializeAndFree(int* anArray, int length) {
            int i = 0;
            for (i = 0; i < length; i++) {
                anArray[i] = 0;
            }
            free(anArray);
        }

    But if there's no way for me to get the length of the allocated memory from a pointer, how does free() "automagically" know what to deallocate? Why can't I get in on the magic, as a C programmer? Where does free() get its free (har-har) knowledge from?

  • Why is the iPhone Simulator not rendering my HTML5 page correctly?

    - by user364978
    I have a page I am developing in .NET using HTML5, intended for a WebView in an iPhone app. The page looks just fine in Safari, but when I load it in the iPhone Simulator it renders as plain text, with no styles or JS loading. I thought it might be an issue with .NET, but seeing as it works in Safari I am stumped. When I use the XHTML doctype it works just fine in the Simulator. Any ideas why this is occurring and what the fix may be? Thanks!

  • Why does my ASP.NET user control's field value reset to 0?

    - by Innogetics
    In the code below, why does the groupId value reset to 0 during the Page_Load event? Perhaps the AccountGrid created with groupId 1 is not the one that is loaded into the page?

        public partial class AccountGrid : System.Web.UI.UserControl
        {
            int groupId = 0;

            public AccountGrid() { }

            // an aspx page creates AccountGrid with "new AccountGrid(1)"
            public AccountGrid(int groupId)
            {
                this.groupId = groupId;
            }

            protected void Page_Load(object sender, EventArgs e)
            {
                DataAccessFacade facade = new DataAccessFacade();
                // groupId resets to 0 here...
                grdAccount.DataSource = facade.GetAccountsByAccountGroupId(this.groupId);
                grdAccount.DataBind();
            }
        }

    In my page, I have:

        public partial class Default : System.Web.UI.Page
        {
            public Default() { }

            public void Page_Load(object sender, EventArgs e)
            {
                ctlAccountGrid = new Views.Controls.Account.AccountGrid(1);
                // should I do databind?
                ctlAccountGrid.DataBind();
            }
        }
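
    A plausible explanation, assuming ctlAccountGrid is also declared in the .aspx markup: ASP.NET instantiates markup-declared controls itself, using the parameterless constructor, so the hand-built new AccountGrid(1) is a second instance that never joins the page's control tree. A common pattern is a public property set on the runtime-created instance (the property name here is illustrative):

        // User control: take the value through a property, not a constructor.
        public partial class AccountGrid : System.Web.UI.UserControl
        {
            public int GroupId { get; set; }

            protected void Page_Load(object sender, EventArgs e)
            {
                var facade = new DataAccessFacade();
                grdAccount.DataSource = facade.GetAccountsByAccountGroupId(GroupId);
                grdAccount.DataBind();
            }
        }

        // Page: configure the instance that ASP.NET created from the markup.
        public partial class Default : System.Web.UI.Page
        {
            public void Page_Load(object sender, EventArgs e)
            {
                ctlAccountGrid.GroupId = 1;
            }
        }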

  • Why do debug symbols so adversely affect the performance of threaded applications on Linux?

    - by fluffels
    I'm writing a ray tracer. Recently I added threading to the program to exploit the additional cores on my i5 quad-core. In a weird turn of events, the debug version of the application now runs slower, but the optimized build runs faster than before I added threading. I'm passing the "-g -pg" flags to gcc for the debug build and the "-O3" flag for the optimized build. Host system: Ubuntu Linux 10.04 AMD64. I know that debug symbols add significant overhead to the program, but the relative performance has always been maintained, i.e. a faster algorithm always ran faster in both debug and optimized builds. Any idea why I'm seeing this behavior?

  • Why do my WPF UIElements not have Preview events?

    - by Matt.M
    I'm building a custom Silverlight UserControl which needs to listen to events using Preview/tunneling, but for some reason the compiler tells me they are not recognized or accessible. For example, I can add an event handler to MouseLeftButtonDown, but not to PreviewMouseLeftButtonDown. This doesn't make sense, because according to Microsoft (http://msdn.microsoft.com/en-us/library/system.windows.uielement_members(v=VS.100).aspx) all UIElements should have the Preview events. Any ideas as to why this is happening? I'm using Visual Studio 2010 Trial, Blend 4 RC and .NET 4, if that makes a difference.
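
    One thing worth checking, offered as an assumption since the linked page documents WPF's UIElement: Silverlight's UIElement has no tunneling Preview* events, so in a Silverlight UserControl they simply don't exist to subscribe to. The nearest Silverlight mechanism is AddHandler with handledEventsToo, which at least lets the control see events a child has already marked handled. A sketch (MyUserControl is a placeholder name):

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;

        public partial class MyUserControl : UserControl
        {
            public MyUserControl()
            {
                InitializeComponent();

                // No PreviewMouseLeftButtonDown in Silverlight; instead,
                // register for the bubbling event and ask to be invoked
                // even when a child element has set e.Handled = true.
                AddHandler(MouseLeftButtonDownEvent,
                           new MouseButtonEventHandler(OnAnyMouseLeftButtonDown),
                           true /* handledEventsToo */);
            }

            private void OnAnyMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
            {
                // inspect the event alongside the normal handlers
            }
        }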
