Search Results

Search found 21802 results on 873 pages for 'erx vb next coder'.


  • Why is VB so popular?

    - by aaaidan
    To me, Visual Basic seems clumsy, ugly, error-prone, and difficult to read. I'll let others explain why. While VB.net has clearly been a huge leap forward for the language in terms of features, I still don't understand why anyone would choose to code in VB over, say, C#. However, I still see (what seems to be) the vast majority of commercial web apps from "MS shops" being built in VB. I could stand corrected on this, but VB still seems more popular than it deserves. Can anyone help answer any (or all) of these questions: Am I missing something with VB? Is it easier to learn, or "friendlier" than C#? Are there features I don't know about? Why is VB/VB.net so frequently used today, especially in web projects?

    Read the article

  • Should a c# dev switch to VB.net when the team language base is mixed?

    - by jjr2527
    I recently joined a new development team where the language preferences are mixed on the .NET platform.

    Dev 1: Knows VB.NET, does not know C#
    Dev 2: Knows VB.NET, does not know C#
    Dev 3: Knows C# and VB.NET, prefers C#
    Dev 4: Knows C# and VB6 (VB.NET should be pretty easy to pick up), prefers C#

    It seems to me that the thought leaders in the .NET space are C# devs almost universally. I also thought that some 3rd party tools didn't support VB.NET, but when I started looking into it I didn't find any good examples. I would prefer to get the whole team on C#, but if there isn't any good reason to force the issue aside from preference, then I don't think that is the right choice. Are there any reasons I should lead folks away from VB.NET?

    Read the article

  • Addressing a variable in VB

    - by Jeff
    Why doesn't Visual Basic .NET have an address-of operator for variables like C# does? In C# (inside an unsafe block), one can write:

        int i = 123;
        int* addr = &i;

    But VB has no equivalent counterpart. It seems like it should be important. UPDATE: Since there's some interest, I'm copying my response to Strilanc below. The case I ran into didn't necessitate pointers by any means, but I was trying to troubleshoot a unit test that was failing, and there was some confusion over whether or not an object being used at one point in the stack was the same object as an object several methods away.
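    Given that the underlying need was reference identity rather than pointer arithmetic, it may be worth noting that VB.NET can answer "is this the same object?" directly, without pointers. A minimal sketch:

        Imports System

        Module ReferenceIdentityDemo
            Sub Main()
                Dim a As New Object()
                Dim b As Object = a   ' same instance
                Dim c As New Object() ' different instance

                ' "Is" and Object.ReferenceEquals compare references, not values,
                ' which is what the failing unit test needed to establish.
                Console.WriteLine(a Is b)                       ' True
                Console.WriteLine(Object.ReferenceEquals(a, c)) ' False
            End Sub
        End Module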

    Read the article

  • VB.NET IF() Coalesce and “Expression Expected” Error

    - by Jeff Widmer
    I am trying to use the equivalent of the C# “??” operator in some VB.NET code that I am working in. The StackOverflow article “Is there a VB.NET equivalent for C#'s ?? operator?” explains the VB.NET IF() operator syntax, which is exactly what I am looking for... and I thought I was going to be done pretty quickly and could move on. But after implementing the IF() operator in my code I started to receive this error:

        Compiler Error Message: BC30201: Expression expected.

    And no matter how I tried using the “IF()” operator, whenever I tried to visit the aspx page that I was working on I received the same error. This other StackOverflow article, “Using VB.NET If vs. IIf in binding/rendering expression”, indicated that the VB.NET IF() operator was not available until VS2008 or .NET Framework 3.5. So I checked the Web Application project properties, but it was targeting the .NET Framework 3.5.

    So I was still not understanding what was going on, but then I noticed the version information in the detailed compiler output of the error page. This happened to be a C# project, but with an ASPX page with inline VB.NET code (yes, it is strange to have that, but that is the project I am working on). So even though the project file was targeting the .NET Framework 3.5, the ASPX page was being compiled using the .NET Framework 2.0. But why? Where does this get set? How does ASP.NET know which version of the compiler to use for the inline code?

    For this I turned to the web.config. Here is the system.codedom/compilers section that was in the web.config for this project:

        <system.codedom>
            <compilers>
                <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
                    <providerOption name="CompilerVersion" value="v3.5" />
                    <providerOption name="WarnAsError" value="false" />
                </compiler>
            </compilers>
        </system.codedom>

    Keep in mind that this is a C# web application project, but my aspx file has inline VB.NET code. The web.config does not have any information for how to compile VB.NET, so it defaults to .NET 2.0 (instead of 3.5, which is what I need). So the web.config needed to include the VB.NET compiler option. Here it is with both the C# and VB.NET options (I copied the VB.NET config from a new VB.NET Web Application project file):

        <system.codedom>
            <compilers>
                <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
                    <providerOption name="CompilerVersion" value="v3.5" />
                    <providerOption name="WarnAsError" value="false" />
                </compiler>
                <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" warningLevel="4" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
                    <providerOption name="CompilerVersion" value="v3.5" />
                    <providerOption name="OptionInfer" value="true" />
                    <providerOption name="WarnAsError" value="false" />
                </compiler>
            </compilers>
        </system.codedom>

    So the inline VB.NET code on my aspx page was being compiled using the .NET Framework 2.0 compiler when it really needed to be compiled with the .NET Framework 3.5 compiler in order to take advantage of the VB.NET IF() coalesce operator. Without the VB.NET web.config compiler option, the default is to compile using the .NET Framework 2.0, where the VB.NET IF() coalesce operator does not exist (at least in the form that I want it in). FYI, there is an older IF statement in the VB.NET 2.0 compiler, which is why it was giving me the unusual “Expression Expected” error message – see this article for when VB.NET got the new updated version.

    EDIT (2011-06-20): I had made a wrong assumption in the first version of this blog post. After a little more research and investigation I was able to figure out that the issue was in the web.config and not with the IIS App Pool. Thanks to the comment from James which forced me to look into this again.
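    For reference, once the page compiles against 3.5, the two-argument form of If() behaves like C#'s “??”. A minimal sketch:

        Imports System

        Module IfCoalesceDemo
            Sub Main()
                Dim nickname As String = Nothing
                ' If() with two arguments returns the first operand unless it is
                ' Nothing, in which case it evaluates and returns the second.
                Dim displayName As String = If(nickname, "Anonymous")
                Console.WriteLine(displayName) ' prints "Anonymous"
            End Sub
        End Module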

    Read the article

  • How to do this in VB 2010 (C# to VB conversion)

    - by user203687
    I would like the following to be translated to VB 2010 (with the advanced syntax features):

        _domainContext.SubmitChanges(
            submitOperation => {
                _domainContext.Load<Customer>(
                    _domainContext.GetCustomersQuery(),
                    LoadBehavior.RefreshCurrent,
                    loadOperation => {
                        var results = _domainContext.Customers.Where(
                            entity => !loadOperation.Entities.Contains(entity)).ToList();
                        results.ForEach(
                            entity => _domainContext.Customers.Detach(entity));
                    }, null);
            }, null);

    I managed to get the above done other ways (but not using anonymous methods). I would like to see all the advanced syntax available in VB 2010 applied to the above. Can anyone help me with this? Thanks
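    A sketch of a direct translation, using the multi-line statement lambdas and implicit line continuation that VB 2010 added, and assuming the same _domainContext members as the C# version:

        _domainContext.SubmitChanges(
            Sub(submitOperation)
                _domainContext.Load(Of Customer)(
                    _domainContext.GetCustomersQuery(),
                    LoadBehavior.RefreshCurrent,
                    Sub(loadOperation)
                        ' Detach every cached Customer that the refresh no longer returned.
                        Dim results = _domainContext.Customers.
                            Where(Function(entity) Not loadOperation.Entities.Contains(entity)).ToList()
                        results.ForEach(Sub(entity) _domainContext.Customers.Detach(entity))
                    End Sub,
                    Nothing)
            End Sub,
            Nothing)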

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come.

    So, if we start at the start, how did you get started in programming?

    When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that.

    How did you learn to program? Was there someone helping you out?

    Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!”

    When did you think this might be something that you actually wanted to do as a career?

    For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it.

    Do you think you learnt much about programming at university?

    Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs, which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed, like TCP. Those are really important for examples for how to approach things.

    Did you do any internships while you were at university?

    Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then.

    Was there just not a culture of looking after your code?

    There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn.

    I was going to ask how you found working with people who were more experienced than you…

    When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and said what they didn’t like. By that point, it was way too late and I’d disagree with them.

    Do you think the lack of code reviews was a bad thing?

    I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight, and then the right way to do it is continuously and in very, very small chunks.

    Have you had any significant projects you’ve worked on outside of a job?

    When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isometric projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week.

    Could you explain a little more about NAct?

    It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors; NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like.

    Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases?

    It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about.

    When you’re using actors, you presumably still have to write your code in a different way from how you would otherwise write single-threaded code.

    You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump.

    If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop?

    Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult.

    If we do get 1000-core desktop PCs, do you think actors will scale up?

    It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism.
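    To make the message-passing style concrete, here is a minimal mailbox-style actor sketch in VB.NET. This is not NAct itself (NAct is a C# library); it just illustrates the pattern under discussion: one task owns the counter state, and callers get data out only by posting a request message and waiting on the reply.

        Imports System
        Imports System.Collections.Concurrent
        Imports System.Threading.Tasks

        Module ActorSketch
            ' Each message is a reply channel; the payload here is just "give me the next count".
            Private ReadOnly Mailbox As New BlockingCollection(Of TaskCompletionSource(Of Integer))

            Sub Main()
                Dim actor = Task.Run(
                    Sub()
                        Dim count = 0 ' owned by this task alone, so no locks are needed
                        For Each request In Mailbox.GetConsumingEnumerable()
                            count += 1
                            request.SetResult(count) ' reply by message, never by shared state
                        Next
                    End Sub)

                ' Callers ask for data with a message and wait on the reply.
                For i = 1 To 3
                    Dim tcs As New TaskCompletionSource(Of Integer)
                    Mailbox.Add(tcs)
                    Console.WriteLine(tcs.Task.Result) ' 1, 2, 3
                Next

                Mailbox.CompleteAdding()
                actor.Wait()
            End Sub
        End Module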
    Do you think it helps if actors get baked into the language, similarly to Erlang?

    Erlang is excellent in that it has thread-local garbage collection. C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language.

    Speaking of language design, do you have a favourite programming language?

    I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much.

    When you say it doesn’t have some of the niggles?

    C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming.

    Pattern-matching, in other words.

    That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#.

    And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far?

    I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it.

    What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile?

    I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong.

    On a slightly different note, how do you like to debug code?

    I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down.

    Is most of the time then spent in the debugger?

    Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong.

    We’ve covered how you get to grips with bugs. How do you get to grips with an entire codebase?

    I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase.

    If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

    Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is.

    And what about tests? Do you think they help at all?

    I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like!

    What about when you’re actually developing? How useful do you find tests in finding bugs or regressions?

    Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are.

    Does that mean that when you write your code the first time, there are no tests?

    Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature, because I’m not good enough at writing tests to think of bugs that I would have written into the code.

    So not writing regression tests for all of your code hasn’t affected you too badly?

    There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time.

    How do you think your programming style has changed over time?

    I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible.

    If you had to summarise the essence of programming in a pithy sentence, how would you do it?

    Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from.

    So you think it’s an art rather than a science?

    It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job.

    I suppose engineering is the foundation on which you build the art.

    Exactly.

    How do you think programming is going to change over the next ten years?

    There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

    What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

    It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript.

    If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

    The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper, because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help.

    Any other visions of the future?

    Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors.

    More Red Gater Coder interviews

    Read the article

  • Would a programmer knowing C# and VB.Net ever choose VB.Net?

    - by Earlz
    Now before someone tells me VB.Net isn't bad like VB was, I know it isn't. But I've yet to speak to a programmer who is completely content that some project they work on is written in VB.Net. Basically, my question is: would a programmer knowing both C# and VB.Net (with all of their team knowing both) ever choose VB.Net? And why? All of the VB.Net projects I've seen were written that way only because the programmer that started it (who usually isn't working there anymore) knew VB6 (or earlier) and wrote it in VB.Net because of the similar syntax. Is there any advantage to writing a program in VB.Net compared to C#? (hopefully this is appropriate here, SO rejected it within a few minutes)

    Read the article

  • Is VB Really Case Insensitive?

    - by Otaku
    I'm not trying to start an argument here, but for whatever reason it's typically stated that VB is case insensitive and C languages aren't (and somehow that is a good thing). But here's my question: Where exactly is VB case insensitive? When I type...

        Dim ss As String
        Dim SS As String

    ...into the VS2008 IDE, the second one has a warning of "Local variable 'SS' is already declared in the current block". In the VBA VBE, it doesn't immediately kick an error, but rather just auto-corrects the case. Am I missing something here with this argument that VB is not case sensitive? (Also, if you know or care to answer, why would that be a bad thing?) EDIT: Why am I even asking this question? I've used VB in many of its dialects for years now, sometimes as a hobbyist, sometimes for small business-related programs in a workgroup. As of the last 6 months I've been working on a big project, much bigger than I anticipated. Much of the sample source code out there is in C#. I don't have any burning desire to learn C#, but if there are things I'm missing out on that C# offers that VB doesn't (an opposite would be VB.NET offering XML literals), then I'd like to know more about that feature. So in this case, it's often argued that C languages are case sensitive and that's good, and VB is case insensitive and that is bad. I'd like to know A) how exactly VB is case insensitive, because every single example in the code editor becomes case sensitive (meaning case gets corrected) whether I want it or not, and B) whether this is compelling enough for me to consider moving to C# if VB.NET case handling is somehow limiting what I could do with code?
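    The two declarations collide precisely because names compare without regard to case: the insensitivity lives in the compiler's symbol lookup, while the IDE's re-casing is just cosmetic tidying layered on top. A minimal sketch:

        Imports System

        Module CaseDemo
            Sub Main()
                Dim greeting As String = "hello"
                ' The compiler resolves GREETING and greeting to the same symbol;
                ' outside the IDE (e.g. compiling with vbc) nothing re-cases it.
                Console.WriteLine(GREETING.ToUpper())
            End Sub
        End Module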

    Read the article

  • iPhone Serialization problem

    - by Jenicek
    Hi, I need to save my own class to a file. I found on the internet that a good approach is to use NSKeyedArchiver and NSKeyedUnarchiver. My class definition looks like this:

        @interface Game : NSObject <NSCoding> {
            NSMutableString *strCompleteWord;
            NSMutableString *strWordToGuess;
            NSMutableArray *arGuessedLetters;    //This array stores characters
            NSMutableArray *arGuessedLettersPos; //This array stores CGRects
            NSInteger iScore;
            NSInteger iLives;
            NSInteger iRocksFallen;
            BOOL bGameCompleted;
            BOOL bGameOver;
        }

    I've implemented the initWithCoder: and encodeWithCoder: methods this way:

        - (id)initWithCoder:(NSCoder *)coder
        {
            if([coder allowsKeyedCoding])
            {
                strCompleteWord = [[coder decodeObjectForKey:@"CompletedWord"] copy];
                strWordToGuess = [[coder decodeObjectForKey:@"WordToGuess"] copy];
                arGuessedLetters = [[coder decodeObjectForKey:@"GuessedLetters"] retain];
                // arGuessedLettersPos = [[coder decodeObjectForKey:@"GuessedLettersPos"] retain];
                iScore = [coder decodeIntegerForKey:@"Score"];
                iLives = [coder decodeIntegerForKey:@"Lives"];
                iRocksFallen = [coder decodeIntegerForKey:@"RocksFallen"];
                bGameCompleted = [coder decodeBoolForKey:@"GameCompleted"];
                bGameOver = [coder decodeBoolForKey:@"GameOver"];
            }
            else
            {
                strCompleteWord = [[coder decodeObject] retain];
                strWordToGuess = [[coder decodeObject] retain];
                arGuessedLetters = [[coder decodeObject] retain];
                // arGuessedLettersPos = [[coder decodeObject] retain];
                [coder decodeValueOfObjCType:@encode(NSInteger) at:&iScore];
                [coder decodeValueOfObjCType:@encode(NSInteger) at:&iLives];
                [coder decodeValueOfObjCType:@encode(NSInteger) at:&iRocksFallen];
                [coder decodeValueOfObjCType:@encode(BOOL) at:&bGameCompleted];
                [coder decodeValueOfObjCType:@encode(BOOL) at:&bGameOver];
            }
            return self;
        }

        - (void)encodeWithCoder:(NSCoder *)coder
        {
            if([coder allowsKeyedCoding])
            {
                [coder encodeObject:strCompleteWord forKey:@"CompleteWord"];
                [coder encodeObject:strWordToGuess forKey:@"WordToGuess"];
                [coder encodeObject:arGuessedLetters forKey:@"GuessedLetters"];
                //[coder encodeObject:arGuessedLettersPos forKey:@"GuessedLettersPos"];
                [coder encodeInteger:iScore forKey:@"Score"];
                [coder encodeInteger:iLives forKey:@"Lives"];
                [coder encodeInteger:iRocksFallen forKey:@"RocksFallen"];
                [coder encodeBool:bGameCompleted forKey:@"GameCompleted"];
                [coder encodeBool:bGameOver forKey:@"GameOver"];
            }
            else
            {
                [coder encodeObject:strCompleteWord];
                [coder encodeObject:strWordToGuess];
                [coder encodeObject:arGuessedLetters];
                //[coder encodeObject:arGuessedLettersPos];
                [coder encodeValueOfObjCType:@encode(NSInteger) at:&iScore];
                [coder encodeValueOfObjCType:@encode(NSInteger) at:&iLives];
                [coder encodeValueOfObjCType:@encode(NSInteger) at:&iRocksFallen];
                [coder encodeValueOfObjCType:@encode(BOOL) at:&bGameCompleted];
                [coder encodeValueOfObjCType:@encode(BOOL) at:&bGameOver];
            }
        }

    And I use these methods to archive and unarchive data:

        [NSKeyedArchiver archiveRootObject:currentGame toFile:strPath];
        Game *currentGame = [NSKeyedUnarchiver unarchiveObjectWithFile:strPath];

    I have two problems. 1) As you can see, the lines with arGuessedLettersPos are commented out; it's because every time I try to encode this array an error comes up (this archiver cannot encode structs), and this array is used for storing CGRect structs. I've seen a solution on the internet where every CGRect in the array is converted to an NSString (using NSStringFromCGRect()) and then saved. Is it a good approach? 2) This is the bigger problem for me. Even if I comment out this line and then run the code successfully, then save (archive) the data and then try to load (unarchive) it, no data is loaded. There aren't any errors, but the currentGame object does not have the data that should be loaded. Could you please give me some advice? This is the first time I'm using archivers and unarchivers. Thanks a lot for every reply.

    Read the article

  • jQuery's next() on elements not next to each other

    - by Rio
    I have bits of horrible code I have to deal with ...

        <div class="container">
        ...
        <tr>
            <td width="100" height="50">
                <a class="swaps"><img src="http://www.blah.jpg" alt="Some Title" id="1"></a></span></td>
        </tr>
        <tr>
            <td width="100" height="50">
                <a class="swaps"><img src="http://www.blah2.jpg" alt="Another title" id="2"></a></span></td>
        </tr>
        </div>

    If I use var thisone = $("#container .swaps:first") to select the first one (with id 1), why do I have trouble selecting thisone.next()?

    Read the article

  • Next Best Action: an emerging engagement paradigm can elevate customer experience to the next level

    - by Richard Lefebvre
    As customer interactions increase across an expanding number of communication channels, business leaders are struggling to understand and engage with each customer effectively. To address this challenge, leading organizations are adopting strategies around “next best action,” a decision-support model that systematically identifies the next best step to take in the customer conversation—whether that action is providing additional information or targeted services, presenting a unique offer, or taking no action at all... Read the complete article - by Mark A. Stevens (vice president, Insight and Customer Strategy, at Oracle) - here

    Read the article

  • Implicit casting in VB.NET

    - by Shimmy
    The question is intended for lazy VB programmers. Please. In VB I can do the following and I won't get any errors.

    Example 1

        Dim x As String = 5
        Dim y As Integer = "5"
        Dim b As Boolean = "True"

    Example 2

        Dim a As EnumType = 4
        Dim v As Integer = EnumType.EnumValue

    Example 3

        Private Sub ButtonClick(sender As Object, e As EventArgs)
            Dim btn As Button = sender
        End Sub

    Example 4

        Private Sub ButtonClick(sender As Button, e As EventArgs)
            Dim data As Contact = sender.Tag
        End Sub

    If I know the expected runtime type for sure, is it 'forbidden' to rely on the VB language's built-in casting? When can I rely on it? Thanks
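    These compile because the project default is Option Strict Off, which permits implicit narrowing conversions and late binding. A sketch of how Option Strict On surfaces each of them at compile time instead of at run time:

        Option Strict On

        Imports System

        Module StrictDemo
            Sub Main()
                ' Dim x As String = 5          ' no longer compiles: implicit narrowing
                Dim x As String = CStr(5)      ' explicit conversions document the intent
                Dim y As Integer = CInt("5")
                Dim b As Boolean = CBool("True")
                Console.WriteLine(x & " " & y.ToString() & " " & b.ToString())
            End Sub
        End Module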

    Read the article

  • how to group by over anonymous type with vb.net linq to object

    - by smoothdeveloper
    I'm trying to write a LINQ to Objects query in VB.NET; here is the C# version of what I'm trying to achieve (I'm running this in LinqPad):

        void Main()
        {
            var items = GetArray(
                  new {a="a", b="a", c=1}
                , new {a="a", b="a", c=2}
                , new {a="a", b="b", c=1}
            );

            (
                from i in items
                group i by new {i.a, i.b} into g
                let p = new { k = g, v = g.Sum((i) => i.c) }
                where p.v > 1
                select p
            ).Dump();
        }

        // because vb.net doesn't support anonymous type array initializers, this will ease the translation
        T[] GetArray<T>(params T[] values) { return values; }

    I'm having a hard time with the Group By syntax, which is not the same (VB requires 'identifier = expression' in some places), as well as with the summing functor (which fails with 'expression required'). Thanks so much for your help!
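    A sketch of the VB equivalent: the keys named in Group By come back as range variables and the aggregates go in the Into clause, so no explicit anonymous grouping type is needed. (The array literal requires VB 2010; on older compilers keep the GetArray helper.)

        Imports System

        Module GroupByDemo
            Sub Main()
                Dim items = {New With {.a = "a", .b = "a", .c = 1},
                             New With {.a = "a", .b = "a", .c = 2},
                             New With {.a = "a", .b = "b", .c = 1}}

                ' g plays the role of k in the C# query; v is the per-group sum.
                Dim query = From i In items
                            Group i By i.a, i.b Into g = Group, v = Sum(i.c)
                            Where v > 1
                            Select a, b, v

                For Each p In query
                    Console.WriteLine("{0} {1} {2}", p.a, p.b, p.v) ' a a 3
                Next
            End Sub
        End Module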

    Read the article

  • Constructing an object and calling a method without assignment in VB.Net

    - by mdryden
    I'm not entirely sure what to call what C# does, so I haven't had any luck searching for the VB.Net equivalent syntax (if it exists, which I suspect it probably doesn't). In C#, you can do this:

        public void DoSomething()
        {
            new MyHelper().DoIt(); // works just fine
        }

    But as far as I can tell, in VB.Net, you must assign the helper object to a local variable or you just get a syntax error:

        Public Sub DoSomething()
            New MyHelper().DoIt() ' won't compile
        End Sub

    Just one of those curiosity things I run into from day to day working on mixed language projects - often there is a VB.Net equivalent which uses less than obvious syntax. Anyone?
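    For what it's worth, a sketch of two assignment-free spellings that do compile in VB.NET (MyHelper here is a stand-in for the poster's class):

        Imports System

        Module CallDemo
            Private Class MyHelper
                Public Sub DoIt()
                    Console.WriteLine("done")
                End Sub
            End Class

            Sub Main()
                ' The Call statement accepts a full invocation expression,
                ' so the new instance never needs a variable.
                Call (New MyHelper()).DoIt()

                ' With...End With is the other assignment-free form.
                With New MyHelper()
                    .DoIt()
                End With
            End Sub
        End Module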

    Read the article

  • Rewriting VB.NET Lambda experession as a C# statement

    - by ChadD
    I downgraded my app from version 4 of the framework to version 3.5, and now I want to implement this VB.NET lambda statement (which works on 3.5)

        Dim colLambda As ColumnItemValueAccessor = Function(rowItem As Object) General_ItemValueAccessor(rowItem, colName)

    and rewrite it in C#. This was my attempt:

        ColumnItemValueAccessor colLambda = (object rowItem) => General_ItemValueAccessor(rowItem, colName);

    When I did this, I got the following error:

        Error 14: One or more types required to compile a dynamic expression cannot be found. Are you missing references to Microsoft.CSharp.dll and System.Core.dll? C:\Source\DotNet\SqlSmoke\SqlSmoke\UserControls\ScriptUserControl.cs 84 73 SqlSmoke

    However, when I downgraded the app from version 4.0 of the framework to 3.5 (because our users only have 3.5 and don't have rights to install 4.0), the reference to "Microsoft.CSharp" was broken. Can I rewrite the VB.NET code in C# using syntax that is valid in C# 3.5, as I was able to in VB.NET? What would the syntax be?

    Read the article

  • Basic Form Properties and Modality in VB.NET

    Creating your First VB.NET Form

    1. Launch Microsoft Visual Basic 2008 Express Edition. If you do not have this program, then you cannot create VB.NET forms. You can read an introductory tutorial on how to install Visual Basic on your computer: http://www.aspfree.com/c/a/VB.NET/Visual-Basic-for-Beginners/
    2. Go to File > New Project.
    3. Since you will be creating a form, select Windows Forms Application.
    4. Select a name for your form project, e.g. MyFirstForm.
    5. Hit OK to get started.
    6. You will then see an empty form -- just like an empty canvas when you paint. It looks like th...

    Read the article

  • The next next C++ [closed]

    - by Roger Pate
    It's entirely too early for speculation on what C++ will be like after C++0x, but idle hands make for wild predictions. What features would you find useful and why? Is there anything in another language that would fit nicely into the state of C++ after 0x? What should be considered for the next TC and TR? (Mostly TR, as the TC would depend more on what actually becomes the next standard.) Export was removed, rather than merely deprecated, in 0x. (It remains a keyword.) What other features carry so much baggage as to be more harmful than helpful?

    ISO Standards' process: I'm not involved in the C++ committee, but it's also a mystery, unfortunately, to most programmers using C++. A few things worth keeping in mind:

    - There will be 10 years between standards, barring extremely exceptional circumstances.
    - The standard can get "bug fixes" in the form of a Technical Corrigendum. This happened to C++98 with TC1, named C++03. It fixed "simple" issues such as making the explicit guarantee that std::vector stores items contiguously, which was always intended.
    - The committee can issue reports which can add to the language. This happened to C++98/03 with TR1 in 2005, which introduced the std::tr1 namespace.

    Read the article

  • how to get Geo::Coder::Many with cpan?

    - by mnemonic
    Ubuntu is installed for development of a Perl project.

        aptitude search Geo-Coder
        i   libgeo-coder-googlev3-perl - Perl module providing access to Google Map

    Aptitude does not refer to Geo::Coder::Many, and cpan cannot build it:

        sudo cpan Geo::Coder::Many

    Then:

        CPAN: Storable loaded ok (v2.27)
        Going to read '/home/jh/.cpan/Metadata'
          Database was generated on Wed, 16 Oct 2013 06:17:04 GMT
        Running install for module 'Geo::Coder::Many'
        Running make for K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz
        CPAN: Digest::SHA loaded ok (v5.61)
        CPAN: Compress::Zlib loaded ok (v2.033)
        Checksum for /home/jh/.cpan/sources/authors/id/K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz ok
        CPAN: File::Temp loaded ok (v0.22)
        CPAN: Parse::CPAN::Meta loaded ok (v1.4401)
        CPAN: CPAN::Meta loaded ok (v2.110440)
        CPAN: Module::CoreList loaded ok (v2.49_02)
        CPAN: Module::Build loaded ok (v0.38)
        CPAN.pm: Going to build K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz
        Can't locate Geo/Coder/Many/Google.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        Can't locate Geo/Coder/Many/Google in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        BEGIN failed--compilation aborted at Build.PL line 54.
        Warning: No success on command[/usr/bin/perl Build.PL --installdirs site]
        CPAN: YAML loaded ok (v0.77)
        KAORU/Geo-Coder-Many-0.42.tar.gz
        /usr/bin/perl Build.PL --installdirs site -- NOT OK
        Running Build test
        Make had some problems, won't test
        Running Build install
        Make had some problems, won't install
        Could not read metadata file. Falling back to other methods to determine prerequisites

    Any suggestions how to resolve this issue?

    Read the article

  • wait for the second VB script untill ended from primary VB script

    - by yael
    Hi, I have a VB script that runs a second VB script. The second VB script asks some questions via the input box. My problem is that "MyShell.Run" does not wait until SecondVBscript.vbs has ended, so the other VB syntax also runs immediately. I need to wait for the MyShell.Run process to end and then perform the other VB syntax. How can I do that?

        Set MyShell = Wscript.CreateObject("WScript.Shell")
        MyShell.Run " C:\Program Files\SecondVBscript.vbs"
        Set MyShell = Nothing

        Other VB syntax
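    A sketch of the usual fix, assuming the goal is simply to block until the child script exits: WshShell.Run takes (command, windowStyle, bWaitOnReturn), so passing True as the third argument makes the call synchronous. The doubled quotes around the path matter because it contains a space.

        Set MyShell = WScript.CreateObject("WScript.Shell")
        ' True = wait for the child process to end before continuing this script.
        MyShell.Run "wscript.exe ""C:\Program Files\SecondVBscript.vbs""", 1, True
        Set MyShell = Nothing
        ' ...the rest of the script now runs only after SecondVBscript.vbs has ended.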

    Read the article

  • VB.NET How Do I skin a vb.net app

    - by letocypher
    I want to skin a VB.NET app I made. I've googled some stuff and I've seen skinned VB.NET apps. However, it seems like any time I try to find someone explaining it, it's a link to a pay-for product. Does anyone have anything useful on this?

    Read the article

  • Do I lose anything by coding in c# and using free online vb.net code convertors?

    - by Gullu
    The company I work for uses VB.NET, since there are many programmers who moved up from VB6 to VB.NET: basically, more VB.NET resources in the company for support/maintenance vs. C#. I am a C# coder and was wondering if I could just continue coding in C# and use one of the many free online C# to VB.NET code converters. That way, I will be more productive and also be more marketable, since there are more C# jobs compared to VB.NET jobs. I did VB6 many years ago and I am comfortable debugging VB.NET code; it's just that for the primary coding language, I am more comfortable in C#. Will I lose anything if I use this approach (code conversion)? Based on what I read online, the future of VB.NET is really "Dim". Please advise. Thank you.

    Read the article
