Search Results

Search found 9413 results on 377 pages for 'built in'.

Page 61/377

  • Virtualized data centre – Part three: Architecture

    - by marc dekeyser
    Having the basics (as discussed in the previous articles) is all good and well, but how do we get started on this?! It can be quite daunting after all! From my own point of view I can absolutely confirm your worries and concerns, but also tell you that it is not as hard as it seems! Deciding on what kind of motherboard to buy, processor and how much memory is an activity you will spend quite some time doing research on. And that is not even mentioning storage! All in all it comes down to setting your expectations and your budget. Probably adjusting your expectations according to your budget :).

    Processors
    As a rule of thumb you want VT-d (virtualization) technology built in to the processor, allowing you to have 64-bit machines running on your host.

    Memory
    The more the better! If you are building a home lab don't bother with ECC unless you are going to run machines that absolutely should be on all the time and your comfort depends on it!

    Motherboard
    Depends on what you are going to do with storage: if you are going the NAS way then the number of SATA ports/RAID capabilities does not really matter. If you decide to have a single server with lots of dedicated storage, it obviously matters how many SATA ports you will have; alternatively you could use a RAID controller (but these set you back a pretty penny if you want one. DELL 6i's are usually available for a good bargain if you can find one!). Easiest is to get one with a built-in (on-board) graphics card, as a separate card just adds more heat, power usage and possible points of failure.

    Networking
    Just like your choice of motherboard, the networking side tends to depend on how you want to go. A single virtualization host with local storage can usually get away with having a single network card; a cluster or a server which uses iSCSI storage tends to have more than one teamed up :).

    Storage
    The dreaded beast from the dark! The horror which lives in the forest! The most difficult decision you are going to make in the building of your lab. Why, you might ask? Simple, my friend: the right choice of storage can make or break your virtualization solution. The performance of your storage choice will have an important impact on the responsiveness of your virtual machines and the deployment of new machines. It also takes a big bite out of your budget! If you decide to go the NAS route you will be dropping a lot more money than if you just had a bunch of disks sitting in a server and manually distributed the virtual machines over the disks.

    Platform
    I'm a Microsoftee, so Hyper-V is a dead giveaway for me. If you are interested in using VMware I won't stop you, but the rest of my posts will be oriented towards Server 2012 Hyper-V (aka 3.0)!

    What did I use?
    Before someone asks me this in the comments, here is a quick rundown of what I am using:
    - Intel 2.4 GHz quad core processors (i something something)
    - 24 GB DDR3 memory
    - Single disk in each server (might look at this as I move the servers to 2012)
    - Synology DS1812+ NAS
    - 3 network interfaces where possible
    - HP 1800 ProCurve managed switch
    I decided to spring for the NAS as I will also be using it for backups and media storage (which is working out quite nicely with my Xbox 360, I must say). At the time of building my 2 boxes (over a year and a half ago) these set me back about 900 euros each, so I can imagine you can build the same or better for a lower price. The next article will be diagramming what I want to achieve and starting a build on the Hyper-V 3.0 cluster!

  • Fetching Partition Information

    - by Mike Femenella
    For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition and the filegroup name on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab the information required. The table I'm working on contains 8.8 billion rows. Finding the distinct partition keys from this table was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $PARTITION function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn't pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table. Queries for distinct values against 8.8 billion rows don't go quickly.

    A few beers into a conversation with a friend, we ended up talking about work, which led to the task described above. The solution was already built in to SQL Server; it just needed to be pulled together. The first table I needed was sys.partition_range_values. This contains one row for each range boundary value for a partition function. In my case I have a partition function which uses dayid values. For example, July 4th would be represented as an int, 20130704. This table lists out all of the dayid values which were defined in the function. This eliminated the need to query my source table for distinct dayid values; everything I needed was already built in here for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values. I created an "everything else" bucket in my SSIS package in case I had any dayid values unaccounted for.

    Getting the number of rows for a partition is very easy. The sys.partitions table contains values for each partition; easy enough to achieve by querying for the object_id and an index_id value of 1 (the clustered index).

    The final piece of information was the filegroup name. There are 2 options available to get the filegroup name: sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it's a matter of preference and data needs. In order to bridge between the sys.partitions table and either sys.data_spaces or sys.filegroups you need to get the container_id. This can be done by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups. The end result is the query below, which typically executes for me in under 1 second. I've included the joins to both sys.filegroups and sys.data_spaces, with one of them commented out. As I mentioned above, this shaves a good 10-15 minutes off my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
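
    The query itself did not survive this excerpt, so here is a rough reconstruction wrapped in a small C# console sketch. The joins follow the description above, but the table name, the column list and the simplistic boundary_id join are my assumptions, not the author's original query.

        using System;
        using System.Data.SqlClient;

        class PartitionInfo
        {
            // Hedged reconstruction of the query described above. The table name is
            // hypothetical, and a production query should also match the partition
            // function/scheme (sys.partition_schemes / sys.partition_functions)
            // before trusting the boundary_id-to-partition_number join.
            const string Sql = @"
                SELECT  p.partition_number,
                        p.rows,
                        prv.value AS boundary_value,    -- the dayid for this partition
                        fg.name   AS filegroup_name
                FROM    sys.partitions p
                JOIN    sys.allocation_units au ON au.container_id = p.hobt_id
                JOIN    sys.filegroups fg       ON fg.data_space_id = au.data_space_id
                -- JOIN sys.data_spaces ds      ON ds.data_space_id = au.data_space_id
                LEFT JOIN sys.partition_range_values prv ON prv.boundary_id = p.partition_number
                WHERE   p.object_id = OBJECT_ID('dbo.MyBigTable')  -- hypothetical table name
                AND     p.index_id  = 1;                           -- the clustered index";

            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
                using (var cmd = new SqlCommand(Sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}\t{1}\t{2}\t{3}", reader[0], reader[1], reader[2], reader[3]);
                    }
                }
            }
        }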

  • Ops Center 12c - Update - Provisioning Solaris on x86 Using a Card-Based NIC

    - by scottdickson
    Last week, I posted a blog describing how to use Ops Center to provision Solaris over the network via a NIC on a card rather than the built-in NIC. Really, that was all about how to install Solaris on a SPARC system. This week, we'll look at how to do the same thing for an x86-based server. The overall process is exactly the same, at least for Solaris 11, with only minor updates. We will focus on Solaris 11 for this blog. Once I verify that the same approach works for Solaris 10, I will provide another update.

    Booting Solaris 11 on x86
    Just as before, in order to configure the server for network boot across a card-based NIC, it is necessary to declare the asset to associate the additional MACs with the server. You likely will need to access the server console via the ILOM to figure out the MAC and to get a good idea of the network instance number. The simplest way to find both of these is to start a network boot using the desired NIC and see where it appears in the list of network interfaces and what MAC is used when it tries to boot. Go to the ILOM for the server. Reset the server and start the console. When the BIOS loads, select the boot menu, usually with Ctrl-P. This will give you a menu of devices to boot from, including all of the NICs. Select the NIC you want to boot from. Its position in the list is a good indication of what network number Solaris will give the device. In this case, we want to boot from the 5th interface (GB_4, net4). Pick it and start the boot process. When it starts to boot, you will see the MAC address for the interface. Once you have the network instance and the MAC, go through the same process of declaring the asset as in the SPARC case. This associates the additional network interface with the server.

    Creating an OS Provisioning Plan
    The simplest way to do the boot via an alternate interface on an x86 system is to do a manual boot. Update the OS provisioning profile as in the SPARC case to reflect the fact that we are booting from a different interface. Update, in this case, the network boot device to be GB_4/net4, or the device corresponding to your network instance number. Configure the profile to support manual network boot by checking the box for manual boot in the OS Provisioning profile.

    Booting the System
    Once you have created a profile and plan to support booting from the additional NIC, we are ready to install the server. Again, from the ILOM, reset the system and start the console. When the BIOS loads, select boot from the Boot Menu as above. Select the network interface from the list as before and start the boot process. When the GRUB bootloader loads, the default boot image is the Solaris Text Installer. On the GRUB menu, select Automated Installer and Ops Center takes over from there.

    Lessons
    The key lesson from all of this is that Ops Center is a valuable tool for provisioning servers whether they are connected via built-in network interfaces or via high-speed NICs on cards. This is great news for modern datacenters using converged network infrastructures. The process works for both SPARC and x86 Solaris installations. And it's easy and repeatable.

  • Intermittent internet connectivity

    - by Rob Oplawar
    UPDATED: I recently built a new computer and set it up to dual-boot Windows 7 and Ubuntu 11.10. In Windows, using the same hardware, my LAN connectivity is solid. In Ubuntu, however, my network interface periodically dies and resets itself; I'll have a solid connection for 30 seconds, and then it will go out for 30 seconds. When I tail the log:

        tail -f /var/log/kern.log

    I see "eth0 link up" messages appear periodically, corresponding with the return of connectivity. I posted the original question months ago and misinterpreted what was going on. With a working Internet connection in Windows, I ignored the problem for some months. See my answer below for the solution (drivers).

    ORIGINAL POST: In Ubuntu, although I maintain a solid connection to my LAN (pinging the router IP address consistently returns a good result), my internet connectivity drops in and out. When I continuously ping 74.125.227.18 (a google.com server), I get responses for a while, then I start getting "Destination Host Unreachable" for a while, then I get responses again. This happens consistently, dropping the connection for about 30 seconds out of every minute or two. Whether I configure my network via the network manager or via /etc/network/interfaces seems to make no difference. I configure with the following settings:

        address 192.168.1.101
        network 192.168.1.0
        gateway 192.168.1.99 (my router's IP address)
        netmask 255.255.255.0 (confirmed as the right netmask for the router)
        broadcast 192.168.1.255 (also confirmed with the router)

    ifconfig confirms that these settings are working:

        eth0      Link encap:Ethernet  HWaddr 50:e5:49:40:da:a6
                  inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::52e5:49ff:fe40:daa6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:11557 errors:0 dropped:11557 overruns:0 frame:11557
                  TX packets:13117 errors:0 dropped:211 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:9551488 (9.5 MB)  TX bytes:1930952 (1.9 MB)
                  Interrupt:41 Base address:0xa000

    I get the same issue when I use automatic DHCP address settings, although I did confirm that there is no other machine on the network with the static IP address I want to use. As I said, the connection to the local network stays solid - I never have any trouble pinging 192.168.1.* - it's internet addresses that I intermittently cannot reach. It's not a DNS issue, because pinging known IP addresses directly shows the same behavior. Also, I don't think it's a hardware issue, as I never have any internet connectivity problems on the same machine in Windows. The network hardware is built into the motherboard: Gigabyte Z68XP-UD3P. I managed to bring the OS fully up to date, according to the update manager, but it didn't fix the issue, and with my limited understanding of network architecture I'm at my wit's end. The only clue I can see is that ifconfig is reporting a lot of dropped packets, but I'm not sure what to do about it.

    UPDATE: It seems my problem is a little more generic than I described; now when I try pinging my router and Google simultaneously, they both go unreachable at the same time. Running ifdown eth0 and then ifup eth0 brings it back temporarily; if I just wait, it comes back after a couple of minutes. I'll broaden my search through intermittent network connectivity problems.

  • How much am I worth hourly as a software/web developer? [closed]

    - by luckysmack
    I may be starting a new job very soon as a developer for both web and desktop software. The primary language I will be using is ASP.NET with C#, with some PHP for existing projects (I've already had one interview, which went very well). The job deals primarily in advertising. But this is my first real job in the market; I have no degrees, but have some college time (~1 yr), so I am primarily self-taught. They are fully aware of my skill set and lack of degrees or certificates. I applied as an entry-level developer. It will be a permanent and full-time/hourly position, not a per-contract job. So since it's my first job of this kind, I'm not really sure what to ask for. Even though I'm self-taught, I'm pretty confident in my skills and know what I'm doing fairly well. I pick up on new concepts very well and find new things fairly easy to learn.

    Here is a very brief summary of my skills:
    - PHP: ~2 years
    - C#/.NET: 2 months
    - Python: basics only, ~1 month
    - OOP familiarity: great (1 year)
    - MVC familiarity: great (1 year)
    - PHP frameworks used: CakePHP (6 months), Yii (3 months), Lithium (3 months)
    - CMS familiar with: Drupal (1.5 years), WordPress (only basics)
    I also have ~2 years of experience maintaining my own VPS server and the hassles all that entails (Linux/Debian). Pretty much all of the above will be used at this job, although I will be using C# the vast majority of the time. I only recently started learning it, but I am moving along fairly rapidly and it's all going smooth as butter.

    So what have I built? I have one proprietary site built in Drupal which is used as an order log for products, inventory, and their shipments. It is also able to process payments through PayPal merchant services. I have worked on a handful of other small apps used here and there which I'm not able to show, but which worked fairly well (all in PHP using frameworks, though). The business does fairly well and is far from a typical corporate type environment. It is much closer to a small development studio. And it is based out of northern California. I don't know how/what more info I can give on them. I also want this to be able to be referenced by other people possibly, so I am looking for general tips and ideas to get an answer as well. I had trouble finding a reasonable range on other websites, which seemed to be either way too low or to show what a veteran developer makes. I know this is a fairly subjective question, but it is difficult to get a reasonable answer or guesstimate anywhere else. Even if it only helps a little bit, it's much appreciated. So as for the direct question, based on all this info (did I miss anything?), how much should I ask for hourly? How much am I worth as a software developer?

  • I thought the new AUTO_SAMPLE_SIZE in Oracle Database 11g looked at all the rows in a table so why do I see a very small sample size on some tables?

    - by Maria Colgan
    I recently got asked this question and thought it was worth a quick blog post to explain in a little more detail what is going on with the new AUTO_SAMPLE_SIZE in Oracle Database 11g and what you should expect to see in the dictionary views. Let's take the SH.CUSTOMERS table as an example. There are 55,500 rows in the SH.CUSTOMERS table. If we gather statistics on SH.CUSTOMERS using the new AUTO_SAMPLE_SIZE, but without collecting histograms, we can check what sample size was used by looking in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views. The sample size shown in USER_TABLES is 55,500 rows, or the entire table, as expected. In USER_TAB_COL_STATISTICS most columns show 55,500 rows as the sample size, except for four columns (CUST_SRC_ID, CUST_EFF_TO, CUST_MARITAL_STATUS, CUST_INCOME_LEVEL). The CUST_SRC_ID and CUST_EFF_TO columns have no sample size listed because there are only NULL values in these columns and the statistics gathering procedure skips NULL values. The CUST_MARITAL_STATUS (38,072) and CUST_INCOME_LEVEL (55,459) columns show less than 55,500 rows as their sample size because of the presence of NULL values in these columns. In the SH.CUSTOMERS table 17,428 rows have NULL as the value for the CUST_MARITAL_STATUS column (17,428 + 38,072 = 55,500), while 41 rows have a NULL value for the CUST_INCOME_LEVEL column (41 + 55,459 = 55,500). So we can confirm that the new AUTO_SAMPLE_SIZE algorithm will use all non-NULL values when gathering basic table and column level statistics.

    Now that we have a clear understanding of what sample size to expect, let's include histogram creation as part of the statistics gathering. Again we can look in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views to find the sample size used. The sample size seen in USER_TABLES is 55,500 rows, but if we look at the column statistics we see that it is the same as in the previous case except for the columns CUST_POSTAL_CODE and CUST_CITY_ID. You will also notice that these columns now have histograms created on them. The sample size shown for these columns is not the sample size used to gather the basic column statistics. AUTO_SAMPLE_SIZE still uses all the rows in the table, less the NULL rows, to gather the basic column statistics (55,500 rows in this case). The size shown is the sample size used to create the histogram on the column. When we create a histogram we try to build it on a sample that has approximately 5,500 non-null values for the column. Typically all of the histograms required for a table are built from the same sample. In our example the histograms created on CUST_POSTAL_CODE and CUST_CITY_ID were built on a single sample of ~5,500 (5,450 rows), as these columns contained only non-null values. However, if one or more of the columns that requires a histogram has null values, then the sample size may be increased in order to achieve a sample of 5,500 non-null values for those columns. In addition, if the difference between the number of nulls in the columns varies greatly, we may create multiple samples: one for the columns that have a low number of null values and one for the columns with a high number of null values. This scheme enables us to get close to 5,500 non-null values for each column.

    +Maria Colgan
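
    Purely as an illustration of where to look, here is a small sketch that pulls those sample sizes out of the dictionary views. The C# wrapper and the ODP.NET connection details are my own assumptions; the two queries against USER_TABLES and USER_TAB_COL_STATISTICS are the part that matters.

        using System;
        using Oracle.DataAccess.Client;   // ODP.NET; assumed to be available

        class SampleSizeCheck
        {
            static void Main()
            {
                using (var conn = new OracleConnection("User Id=sh;Password=sh;Data Source=orcl"))
                {
                    conn.Open();

                    // Table-level sample size
                    Run(conn, "SELECT table_name, num_rows, sample_size FROM user_tables WHERE table_name = 'CUSTOMERS'");

                    // Column-level sample sizes, null counts and histogram type
                    Run(conn, "SELECT column_name, sample_size, num_nulls, histogram FROM user_tab_col_statistics WHERE table_name = 'CUSTOMERS' ORDER BY column_name");
                }
            }

            static void Run(OracleConnection conn, string sql)
            {
                using (var cmd = new OracleCommand(sql, conn))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        for (int i = 0; i < reader.FieldCount; i++)
                            Console.Write("{0}\t", reader[i]);
                        Console.WriteLine();
                    }
                }
            }
        }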

  • Building a VS2010 solution from TFS2008

    - by slugster
    I have a TFS 2008 build agent that has been used to build .NET 3.5 applications. I now have a .NET 4.0 app which I want to compile on the same build agent. I have ensured that MSBuild 4.0 is installed on there and all the required componentry is also installed, but I am getting the following MSB4062 error when building:

        [Any CPU/Release] C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets(244,5): error MSB4062: The "Microsoft.WebApplication.Build.Tasks.GetSilverlightItemsFromProperty" task could not be loaded from the assembly C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. Confirm that the declaration is correct, and that the assembly and all its dependencies are available.

    I am presuming that I get this because the TFSBuild.proj gets executed by MSBuild 3.5, which in turn means my solution is compiled with MSBuild 3.5. Am I correct with my diagnosis? Is there any way to ensure that TFS 2008 uses MSBuild 4.0 for my solution? Can it be done on a single team project so that it doesn't affect any other team projects being built on the same build agent? Note that I have checked the question "Build failing - VS2010 solution on TFS2008" and this is not a duplicate. Thanks :)

  • Invalid byte 1 of 1-byte UTF-8 sequence

    - by user275886
    I have a MyFaces Facelets application, where the page coding is a bit rugged. Anyway, it's developed with Eclipse and built with Ant, and kind of runs OK in Tomcat 2.0.26. So far so good. Now, I'd rather build with Maven, so I made a couple of POM files, opened them in NetBeans and built, and now I have a war file that deploys OK. However, on any Facelets page it barfs out with:

        com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: Invalid byte 1 of 1-byte UTF-8 sequence.
        at com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.invalidByte(UTF8Reader.java:684)
        at com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.read(UTF8Reader.java:554)
        at com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.load(XMLEntityScanner.java:1742)

    So, I've tried a lot of different things, and the application actually runs simple pages without the Facelets stuff. But everything runs if I just build with Ant instead ... So my question is: what's the most likely difference between an Ant build and a Maven build that may cause this? It also seems that even though I've configured for UTF-8 in NetBeans and the POM files, NetBeans eventually ends up reporting the Facelets files as ISO-8859-1 after some editing. I've made sure that the most central libs are of the same version (especially Xerces 2.3.0), and I've added an encoding servlet filter that had no effect. And I'd rather fix the Maven build and keep the buggy pages than the other way around ... it's my intention to introduce Maven, not fix buggy pages.

  • Compiling scipy on Windows 32-bit: linker error with libf77blas.a

    - by Sridhar Ratnakumar
    Has anyone tried compiling SciPy 0.7.1 on Windows using numpy-1.3.0 that was built with the pre-built ATLAS libraries (atlas3.6.0_WinNT_P4SSE2.zip) linked in the installation document. I get the following linker error, and have no ideas as to how to fix this issue. $ python setup.py config --compiler=mingw32 build --compiler=mingw32 install --root=i [...] creating build\temp.win32-2.6\Release creating build\temp.win32-2.6\Release\scipy creating build\temp.win32-2.6\Release\scipy\integrate compile options: '-DNO_ATLAS_INFO=2 -I"C:\Documents and Settings\apy\Application Data\Python\Python26\site-packages\numpy\core\inc lude" -IC:\Python26\include -IC:\Python26\PC -c' gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -DNO_ATLAS_INFO=2 -I"C:\Documents and Settings\apy\Application Data\Python\Python26\ site-packages\numpy\core\include" -IC:\Python26\include -IC:\Python26\PC -c scipy\integrate\_odepackmo dule.c -o build\temp.win32-2.6\Release\scipy\integrate\_odepackmodule.o C:\MinGW\bin\g77.exe -g -Wall -mno-cygwin -g -Wall -mno-cygwin -shared build\temp.win32-2.6\Release\scipy\integrate\_odepackmodule .o -LC:\atlas3.6.0_WinNT_P4SSE2 -LC:\MinGW\lib -LC:\MinGW\lib\gcc\mingw32\3.4.5 -LC:\Python26\libs -LC:\Act ivePython32Python26\PCbuild -Lbuild\temp.win32-2.6 -lodepack -llinpack_lite -lmach -latlas -lcblas -lf77blas -llapack -lpython26 - lg2c -o build\lib.win32-2.6\scipy\integrate\_odepack.pyd C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_daxpy.o):ATL_F77wrap_axpy.c:(.text+0x3c): undefined reference to `ATL _daxpy' C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_dscal.o):ATL_F77wrap_scal.c:(.text+0x26): undefined reference to `ATL _dscal' C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_dcopy.o):ATL_F77wrap_copy.c:(.text+0x3d): undefined reference to `ATL _dcopy' C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_idamax.o):ATL_F77wrap_amax.c:(.text+0x1e): undefined reference to `AT L_idamax' C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_ddot.o):ATL_F77wrap_dot.c:(.text+0x36): undefined reference to `ATL_d dot' collect2: ld returned 1 exit status error: Command "C:\MinGW\bin\g77.exe -g -Wall -mno-cygwin -g -Wall -mno-cygwin -shared build\temp.win32-2.6\Release\scipy\integrat e\_odepackmodule.o -LC:\atlas3.6.0_WinNT_P4SSE2 -LC:\MinGW\lib -LC:\MinGW\lib\gcc\mingw32\3.4.5 -LC:\Python 26\libs -LC:\Python26\PCbuild -Lbuild\temp.win32-2.6 -lodepack -llinpack_lite -lmach -latlas -lcblas -lf77blas -llap ack -lpython26 -lg2c -o build\lib.win32-2.6\scipy\integrate\_odepack.pyd" failed with exit status 1 Does anyone know what could have gone wrong here? Looking for ATL_daxpy, for example, in libf77blas.a resulted in: $ strings libf77blas.a | grep -i daxpy _daxpy_ _atl_f77wrap_daxpy_ ATL_F77wrap_daxpy.o/ daxpy.o/ 1081731936 1003 513 100755 420 ` daxpy.f _daxpy_ _atl_f77wrap_daxpy_ _atl_f77wrap_daxpy_ _ATL_daxpy There is _ATL_daxpy, but no ATL_daxpy.

  • C# Assembly.GetTypes() - ReflectionTypeLoadException

    - by Veskechky
    We implement a plugin framework for our application and load plugin assemblies using Assembly.LoadFrom. We then use GetTypes() and further examine the types within each plugin file for supported interfaces. A path for the plugins is provided by the user and we cycle through each of the files in the folder to see if it (the plugin) supports our plugin interface. If it does, we create an instance; if not, we move on to the next file. We build two versions of software from the one code base (appA_1 and appA_2). Loading the plugins works well when the plugins are loaded by the application that was built at the same time as the plugin file. However, if we build appA_2 and point it to the plugin folder of appA_1, we get an exception when GetTypes() is called. A basic version of our code is:

        var pluginAssembly = Assembly.LoadFrom(FileName);
        foreach (var pluginType in pluginAssembly.GetTypes())
        {

    We get a "ReflectionTypeLoadException" exception. This is concerning because we want our application to be able to load the types of any plugin, built by anyone. Is there something we are missing?
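
    A minimal sketch of one common way to cope with this (my own illustration, not the poster's code): catch the ReflectionTypeLoadException, log its LoaderExceptions to see which dependency failed to resolve, and keep working with the types that did load; the failed ones come back as null entries in ex.Types.

        using System;
        using System.Linq;
        using System.Reflection;

        static class PluginScanner
        {
            public static Type[] GetLoadableTypes(string fileName)
            {
                var pluginAssembly = Assembly.LoadFrom(fileName);
                try
                {
                    return pluginAssembly.GetTypes();
                }
                catch (ReflectionTypeLoadException ex)
                {
                    // Each LoaderException explains one type that could not be loaded,
                    // typically a missing or mismatched dependency of the plugin.
                    foreach (var loaderEx in ex.LoaderExceptions)
                        Console.WriteLine(loaderEx.Message);

                    // Types that failed to load are null; the rest are still usable.
                    return ex.Types.Where(t => t != null).ToArray();
                }
            }
        }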

  • Another BC30002: Type is not defined.

    - by brett
    Here's what I've done... I used wsdl.exe to create a .cs class for my WSDL service connection. I made a Visual Studio project to compile the .cs into a DLL having namespace CalculatorService (CalculatorService.dll). Successful thus far. I created an ASP.NET project and added my namespace import:

        <%@ Import Namespace="CalculatorService" %>

    I right-clicked on the project, clicked Add Reference, found my .dll, added it, built the project, and checked /bin to ensure my DLL was there (and it was).

        <%
        'I called the namespace:'
        Dim calcService As New CalculatorService.CalculatorService()
        'called the function from the service'
        Dim xmlResult = calcService.GetSVS_ItemTable_XML("", "", "", "", "", "")
        'printed the result'
        Response.Write(xmlResult)
        %>

    All is well LOCALLY while debugging. It found the CalculatorService, connected to it, got the XML and displayed it. I then wanted to put it on the web, so I built and published my project with "Copy: Only files needed to run this application" selected. Deploying on the web says Type 'CalculatorService.CalculatorService' is not defined. Here is a link to the live script: http://vansmith.com/_iaps.wsdl/pub/Default.aspx Any ideas?

  • which language to use for building web application?

    - by harshit
    Hi, I already have experience in developing websites using Java technologies. Now I have a task to develop another website and I have the liberty to select the technology to build it with. I don't want to build it using the standard Java/J2EE technology, as I want to learn a new language. The specification of the website, as far as I can give it, is that:
    1) It is a real-estate based site.
    2) It will have a DB of real estate data, around a million records.
    3) The website will have more than 1000 hits/day and will have various functionality like search, add, delete, generate reports, etc. So I mean the UI should be good and fast.
    Technologies I have in mind: .NET (I have already worked on it, but it is licensed so I may not go for it), Groovy, Ruby on Rails, Play, GWT, etc. I am a college student and the website is again for a student (a non-techie guy), so I have 5-6 months to bring the website up. I have read about them, but they all have advantages and disadvantages, and I would like to hear from people who have used them and can tell me what they felt about the languages and any problems while developing with them. Please feel free to drop any opinion you have. Thanks

  • Complete list of tools and technologies that make up a solid ASP.NET MVC 2 development environment f

    - by Dr Dork
    This question is related to another wiki I found on SO, but I'd like to develop a more comprehensive example of an automated ASP MVC 2 development environment that can be used to develop and deploy a wide range of small-scale websites by beginners. As far as characteristics of the dev environment go, I'd like to focus on beginner-friendly over powerful since the other wiki focuses more on advanced, powerful setups. This information is targeted for beginners (that already know C# and understand web dev concepts) that have selected...
    - ASP.NET MVC 2 as their dev framework
    - Visual Studio 2010 Pro (or 2008 Pro SP1) as their IDE
    - Windows 7 as their OS
    and are looking for a quick and easy-to-setup environment that covers managing, building, testing, tracking, and deploying their website with as much automation as possible. A system that can be used for becoming familiar with the whole process, as well as a launching point for exploring other, more custom and powerful systems. Since we've already selected the Compiler, Framework, and OS, I'd like to develop ideas for...
    - Code editor (unless you feel VS will suffice for all areas of code)
    - Database and related tools
    - Unit testing (VS?)
    - Continuous integration build system (VS?)
    - Project planning
    - Issue tracking
    - Deployment (VS?)
    - Source management (VS?)
    - ASP, C#, VS, and related blogs that beginners can follow
    - Any other categories I'm probably missing
    Since we're already using Visual Studio, I'd like to focus on the out-of-the-box solutions and features built into Visual Studio, unless you feel there are better solutions that work well with VS and are easier to use than the features built directly into VS. Thanks so much in advance for your wisdom!

  • Enable DOM Access for Silverlight Web Part in Sharepoint 2010

    - by Bhaskar
    I am hosting a Silverlight 3.0 control on my SharePoint 2010 page. I am using the built-in SilverlightWebPart web part, where I have provided the path for the .xap file. It's displaying properly, but when I try to access System.Windows.Browser, it's throwing an error. My code is:

        public static string GetQueryString(string key)
        {
            try
            {
                var documentQueryString = (Dictionary<string, string>)System.Windows.Browser.HtmlPage.Document.QueryString;
                if (documentQueryString.ContainsKey(key))
                {
                    return documentQueryString[key].ToString();
                }
            }
            catch (Exception ex)
            {
                return ex.Message;
            }
            return string.Empty;
        }

    The error I am getting is: The DOM/scripting bridge is disabled. How do I enable this? I know if I host this in an ASP.NET page, I can add the param <param name="enablehtmlaccess" value="true"/>. I have tried putting this web part in a "content editor web part", where I have embedded the object tag to call the .xap file, and it's working totally fine. I need to make it work using the built-in Silverlight web part.
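
    As a side note, here is a hedged sketch of how the Silverlight side can at least fail gracefully while the web part's HTML access is being sorted out: System.Windows.Browser.HtmlPage.IsEnabled reports whether the DOM/scripting bridge is available, so the query-string lookup can be guarded instead of relying on the catch block. This is my own illustration, not a fix for the SharePoint web part configuration itself.

        using System.Collections.Generic;
        using System.Windows.Browser;

        public static class QueryStringHelper
        {
            public static string GetQueryString(string key)
            {
                // When the DOM/scripting bridge is disabled (as in the error above),
                // HtmlPage.IsEnabled is false and touching HtmlPage.Document would throw.
                if (!HtmlPage.IsEnabled)
                    return string.Empty;

                IDictionary<string, string> queryString = HtmlPage.Document.QueryString;
                string value;
                return queryString.TryGetValue(key, out value) ? value : string.Empty;
            }
        }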

  • What components and IDE add-ins do you install with Delphi?

    - by Mick
    After a clean install of Delphi, what components and IDE add-ins do you make certain that you install? What's your Delphi "rig"? Here's what I install after a clean installation of Delphi 2007:
    - JCL / JVCL - JEDI Code Library and JEDI Visual Code Library (600+ components)
    - JWA / JWSCL - JEDI API Library & Security Code Library
    - GExperts - a free set of tools built to increase the productivity of Delphi and C++Builder programmers by adding several features to the IDE
    - TWM's experimental GExperts code formatter - adds code formatting capabilities to Delphi
    - Virtual TreeView - a treeview control built from the ground up; more than 5 years of development made it one of the most flexible and advanced tree controls available today
    - MustangPeak Components (EasyListview, Virtual ShellTools, etc.) - EasyListview is a control that has no dependence on the Microsoft Listview control but has all the features of the latest version from Microsoft; also includes 'Explorer.exe'-like shell components
    - Synapse lightweight networking components - simple low-level non-visual objects for easy programming without problems (no required multi-threaded synchronization, no need for Windows message processing, ...); great for command line utilities, visual projects, NT services
    - EurekaLog - a complete bug resolution tool for Delphi and C++Builder developers that gives your application the power to catch every exception and memory leak, directly on the end user PC, generating a detailed log of the call stack (with file, class, method and line number), optionally sending you a copy of each log entry via email or to a web bug-tracker
    - DelphiSpeedUp - an IDE plugin for Delphi and C++Builder; it improves the IDE's startup speed and increases the general speed of the whole IDE
    - DDevExtensions - extends the Delphi/C++Builder IDE by adding some new productivity features
    - IDE Fix Pack - a DLL expert that fixes a number of RAD Studio 2007 bugs at runtime; all changes are done in memory, no file on disk is modified
    - TPerlRegex - regular expression library for Delphi
    How about other Delphi developers?

  • Visual Studio 2008 Preprocessor weirdness

    - by Canacourse
    We have set up a simple versioning system for our builds to ensure the built files always indicate whether they are Beta Debug or Beta Release builds. I moved the file version info to myapp.rc2 and created version.h:

        // version.h
        // _DEBUG is defined by VS
        #define _BETA
        #ifdef _BETA
        #define FILE_DESC1 _T("BETA ")
        #else
        #define FILE_DESC1 // blank on purpose
        #endif
        #ifdef _DEBUG
        #define FILE_DESC2 _T("Debug Version ")
        #else
        #define FILE_DESC2 _T("Release Version ") // this is greyed out in the ide when building
        #endif
        #define FILE_DESC FILE_DESC1 FILE_DESC2

        // myapp.rc2
        include "version.h"
        #ifndef _T
        #define _T(x) x
        #endif
        VS_VERSION_INFO VERSIONINFO
        FILEVERSION PROD_VER_MJR,PROD_VER_MIN,PROD_VER_UPD,JOBUILDER_BUILD
        PRODUCTVERSION PROD_VER_MJR,PROD_VER_MIN
        FILEFLAGSMASK 0x3fL
        #ifdef _DEBUG
        FILEFLAGS 0x1L
        #else
        FILEFLAGS 0x0L
        #endif
        FILEOS 0x4L
        FILETYPE 0x1L
        FILESUBTYPE 0x0L
        BEGIN
            BLOCK "StringFileInfo"
            BEGIN
                BLOCK "040904e4"
                BEGIN
                    VALUE "CompanyName", COMPANY_NAME
                    VALUE "FileDescription", FILE_DESC
                    VALUE "FileVersion", JOBBUI_VERSION
                    VALUE "InternalName", "MyApp.exe"
                    VALUE "LegalCopyright", COPYRIGHT
                    VALUE "OriginalFilename", "MyApp.exe"
                    VALUE "ProductName", PRODUCT_NAME
                    VALUE "ProductVersion", PRODUCT_VERSION
                    VALUE "Comments", COMMENTS
                END
            END
            BLOCK "VarFileInfo"
            BEGIN
                VALUE "Translation", 0x409, 1252
            END
        END

    However, when the exe is built in the debug output directory, the file description always incorrectly says "BETA Release Version" instead of "BETA Debug Version". Yet the IDE indicates that "#define FILE_DESC2 _T("Debug Version ")" would be used. Why might this be? I have used these files on another project and they work correctly. Thank you...

  • Two different assembly versions "The located assembly's manifest definition does not match the assembly reference"

    - by snicker
    I have a project that I am working on that requires the use of the MySQL connector for NHibernate (MySql.Data.dll). I also want to reference another project (Migrator.NET) in the same project. The problem is, even though Migrator.NET is built with the reference to MySql.Data with Specific Version = false, it still tries to reference the older version of MySql.Data that the library was built with, instead of just using the version that is there, and I get the exception listed in the title:

        ---- System.IO.FileLoadException : Could not load file or assembly 'MySql.Data, Version=1.0.10.1, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    The version I am referencing in the main assembly is 6.1.3.0. How do I get the two assemblies to cooperate?

    Edit: For those of you suggesting assembly binding redirection, I have set this up:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="MySql.Data" publicKeyToken="c5687fc88969c44d" culture="neutral"/>
                <bindingRedirect oldVersion="0.0.0.0-6.1.3.0" newVersion="6.1.3.0"/>
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>

    I am referencing the main assembly in another project and still getting the same errors. If my main assembly is copied local to be used in the other assembly, will it use the settings in app.config, or does this information have to be included with every application or assembly that references my main assembly?
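
    For what it's worth, binding redirects are generally read from the configuration file of the executable that hosts the AppDomain, so a redirect in a class library's app.config tends to have no effect on the applications that reference it. One hedged, code-only alternative (my own sketch, not something from the question) is to hook AppDomain.AssemblyResolve early in the host process and hand back the MySql.Data build that is actually deployed:

        using System;
        using System.Reflection;

        static class MySqlDataRedirect
        {
            // Call once at application startup, before any Migrator.NET code runs.
            public static void Install()
            {
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    var requested = new AssemblyName(args.Name);
                    if (requested.Name != "MySql.Data")
                        return null;   // not our problem; let normal probing continue

                    // Migrator.NET asks for 1.0.10.1; hand back the 6.1.3.0 build
                    // that is actually deployed next to the application instead.
                    return Assembly.Load(
                        "MySql.Data, Version=6.1.3.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d");
                };
            }
        }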

  • How to arrange models, views, controllers in a new Kohana 3 project

    - by Pekka
    I'm looking at how to set up a mid-sized web application with Kohana 3. I have implemented MVC patterns in the past but never worked against a "formalized" MVC framework, so I'm still getting my head around the terminology - toying around with basic examples, building views and templates, and so on. I'm progressing fairly well, but I want to set up a real-world web project (one of my own that I've been planning for quite some time now) as a learning object. Example-based documentation is a bit sparse for Kohana 3 right now - they say so themselves on the site. While I'm not worried about learning the framework soon enough, I'm a bit at a loss on how to arrange a healthy code base from the start - i.e. how to split up controllers, how to name them, and how to separate the functionality into the appropriate models.

    My application could, in its core, be described as a business directory with a main businesses table. Businesses can be listed by category and by street name. Each business has a detail page. Business owners can log in and edit their business's entry. Businesses can post offers into an offers table. I know this is pretty vague; I'll be more than happy to go into more detail on request. Supposing I have all the basic functionality worked out and in place already - list all businesses, edit business, list businesses by street name, create offer, and so on - and I'm just looking for how to fit the functionality into a Kohana application structure that can be easily extended:
    - Do you know real-life, publicly accessible examples of applications built on Kohana 3 where I could take a peek at how they do it?
    - Are there conventions or best practices on how to structure an extendable login area for end users in a Kohana project that is not only able to handle a business directory page, but further products on separate pages as well?
    - Do you know application structuring HOWTOs or best practices for Kohana 3 not mentioned in the user guide and the unofficial Wiki?
    - Have you built something similar and could give me some recommendations?

  • SCM for Xcode?

    - by Gregor
    I am developing an application for the Mac as a small team (me + another person) effort. We are located in different cities, and have started to see the need for solid source control management. Neither of us has any experience with this, and both of us are relatively new to Cocoa/Obj-C/Xcode (but we do have C knowledge). Does anyone have any recommendations as to which SCM system to choose? I understand that a lot of people are using Subversion, which is also supported in Xcode 3.1. Does anyone have experience with using Subversion through Xcode? Or is it a better option to choose a stand-alone GUI alternative, such as Versions? Grateful for any input on this. Gregor Tomasevic, Sweden

    Update/personal experiences: Since this post, we have tried Versions and Cornerstone (both of which are SVN GUI clients), as well as Xcode's built-in support for SVN. We were not particularly pleased with Versions, which seemed to have some problems with committing unversioned files/build files. The built-in SVN support in Xcode works quite well, although it probably has limitations that we have still not run into. Cornerstone is both simple to use and powerful, and does not seem to suffer from the problems we encountered with Versions. So far, we have just tried committing, updating the repo, checking out latest/previous versions of our files, and worked some with file comparison. It might be a whole different ball game once you start working extensively with branching, an area in which we have been told both these GUI clients might have some weaknesses. For what it's worth (and with only days of evaluation) Cornerstone seems to be a somewhat better alternative, although for simpler SCM, Xcode works well too. Thanks for all the comments.

  • Extract known pattern substring from NSString (without regex)

    - by d11wtq
    I'm really tempted to drop RegexKit (or my own libpcre wrapper) into my project in order to do this, but before I do that I want to know how Cocoa developers manage to do half of this basic stuff without really convoluted code or without linking with RegexKit or another regular expression library. I find it gobsmacking that Cocoa does not include any regular expression matching features. I've become so accustomed to using regular expressions for all kinds of things that I'm lost without them. I can do what I need without them, but the code would be rather convoluted. So, Cocoa devs, I ask you, what's the "Cocoa way" to do this... The problem is an everyday problem in programming as far as I'm concerned. Cocoa must have ways of doing this with the built-in features. Note that the position of the elements I want to match changes, and sometimes "quotes" are present. Whitespace is variable. Take the following strings:

        Content-Type: application/xml; charset=utf-8
        Content-Type: text/html; charset="iso-8859-1"
        Content-Type: text/plain; charset=us-ascii
        Content-Type: text/plain; name="example.txt"; charset=utf-8

    From all of these strings, how would you go about determining the mime type (e.g. text/plain) and the charset (e.g. utf-8) using just the built-in Cocoa classes? I'd end up performing a series of -rangeOfString: and substring calls, with conditional checks to deal with the optional quotes etc. Is there a way to do this with NSScanner? The NSScanner class seems to have a pretty naive API to me. Something like C's sscanf() that works for NSString objects would be an ideal fit. Most of my string parsing needs are simple, such as this example, so maybe regular expressions, while I'm accustomed to them, are overkill?
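
    For illustration only, here is the split-on-semicolon approach the question is circling around, sketched in C# rather than Objective-C; the same steps map onto NSString's componentsSeparatedByString: and stringByTrimmingCharactersInSet:, or onto NSScanner. The helper name and return shape are my own, so treat it as pseudocode for the algorithm, not a Cocoa answer.

        using System;
        using System.Collections.Generic;

        static class ContentTypeParser
        {
            // Split on ';', treat the first token as the mime type, and parse the
            // rest as key=value pairs, trimming whitespace and optional quotes.
            public static (string MimeType, Dictionary<string, string> Params) Parse(string header)
            {
                var value = header.Substring(header.IndexOf(':') + 1);   // drop "Content-Type:"
                var parts = value.Split(';');
                var parameters = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

                for (int i = 1; i < parts.Length; i++)
                {
                    var pair = parts[i].Split(new[] { '=' }, 2);
                    if (pair.Length == 2)
                        parameters[pair[0].Trim()] = pair[1].Trim().Trim('"');
                }
                return (parts[0].Trim(), parameters);
            }
        }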

  • trigger config transformation in TFS 2010 or msbuild

    - by grenade
    I'm attempting to make use of configuration transformations in a continuous integration environment. I need a way to tell the TFS build agent to perform the transformations. I was kind of hoping it would just work after discovering the config transform files (web.qa-release.config, web.production-release.config, etc...). But it doesn't. I have a TFS build definition that builds the right configurations (qa-release, production-release, etc...) and I have some specific .proj files that get built within these definitions, and those contain some environment-specific parameters, e.g.:

        <PropertyGroup Condition=" '$(Configuration)'=='production-release' ">
          <TargetHost Condition=" '$(TargetHost)'=='' ">qa.web</TargetHost>
          ...
        </PropertyGroup>
        <PropertyGroup Condition=" '$(Configuration)'=='qa-release' ">
          <TargetHost Condition=" '$(TargetHost)'=='' ">production.web</TargetHost>
          ...
        </PropertyGroup>

    I know from the output that the correct configurations are being built. Now I just need to learn how to trigger the config transformations. Is there some hocus pocus that I can add to the final .proj in the build to kick off the transform and blow away the individual transform files?
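
    If it helps, the transform step itself can also be run outside of the web publishing pipeline. The engine Visual Studio 2010 uses lives, if memory serves, in Microsoft.Web.Publishing.Tasks.dll (it exposes the TransformXml MSBuild task; later versions repackage the same classes as Microsoft.Web.XmlTransform). A rough sketch only: the assembly reference, file names and output path below are assumptions of mine, not part of the original build definition.

        using Microsoft.Web.Publishing.Tasks;   // Microsoft.Web.XmlTransform in newer versions

        class ConfigTransformRunner
        {
            // Applies web.<configuration>.config on top of web.config, the same way
            // the publishing pipeline does. Paths here are illustrative only.
            static int Main()
            {
                var sourceFile = @"web.config";
                var transformFile = @"web.qa-release.config";
                var outputFile = @"_transformed\web.config";

                using (var document = new XmlTransformableDocument())
                using (var transform = new XmlTransformation(transformFile))
                {
                    document.PreserveWhitespace = true;
                    document.Load(sourceFile);

                    if (!transform.Apply(document))
                        return 1;   // the transform reported a failure

                    System.IO.Directory.CreateDirectory(@"_transformed");
                    document.Save(outputFile);
                    return 0;
                }
            }
        }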

  • Making REST request using LWP::Simple

    - by Alienfluid
    I am trying to use LWP::Simple to make a GET request to a REST service. Here's the simple code:

        use LWP::Simple;
        $uri = "http://api.stackoverflow.com/0.8/questions/tagged/php";
        $jsonresponse = get $uri;
        print $jsonresponse;

    On my local machine, running Ubuntu 10.4 and Perl version 5.10.1:

        farhan@farhan-lnx:~$ perl --version
        This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi

    I can get the correct response and have it printed on the screen. E.g.:

        farhan@farhan-lnx:~$ head -10 output.txt
        {
          "total": 1000,
          "page": 1,
          "pagesize": 30,
          "questions": [
            {
              "tags": [
                "php",
                "arrays",
                "coding-style"
        (... snipped ...)

    But on my host's machine, which I SSH into, I get garbage printed on the screen for the same exact code. I am assuming it has something to do with the encoding, but the REST service does not return the character set type in the response, so how do I force LWP::Simple to use the correct encoding? Any ideas what may be going on here? Here's the version of Perl on my host's machine:

        [dredd]$ perl --version
        This is perl, v5.8.8 built for x86_64-linux-gnu-thread-multi

  • AndEngine VS Android's Canvas VS OpenGLES - For rendering a 2D indoor vector map

    - by Orchestrator
    This is a big issue that I've been trying to figure out for a long time already. I'm working on an application that should include a 2D vector indoor map in it. The map will be drawn from an .svg file that will specify all the data of the lines, curved lines (paths) and rectangles that should be drawn. My main requirements for the map are:
    - Support touch events to detect where exactly the finger is touching.
    - Great image quality, especially when considering the drawing of curved and diagonal lines (anti-aliasing).
    - Optional but very nice to have - a built-in ability to zoom, pan and rotate.
    So far I have tried AndEngine and Android's Canvas. With AndEngine I had trouble implementing anti-aliasing for rendering smooth diagonal lines or drawing curved lines, and as far as I understand, this is not an easy thing to implement in AndEngine. Though I have to mention that AndEngine's ability to zoom in and pan with the camera, instead of modifying the objects on the screen, was really nice to have. I also have a little experience with Android's built-in Canvas, mainly with viewing simple bitmaps, but I'm not sure if it supports all of these things, and especially if it would provide smooth results. Last but not least, there's the option of just plain OpenGL ES 1 or 2, which, as far as I understand, with enough work should be able to support all the features I require. However it seems like something that would be hard to implement, and I've never programmed in OpenGL or anything like it, but I'm very much willing to learn. To sum it all up, I need a platform that would provide me with the ability to do the 3 things I mentioned before, but also, very importantly, allow me to implement this feature as fast as possible. Any kind of answer or suggestion would be very much welcomed, as I'm very eager to solve this problem! Thanks!

  • For "draggable" div tags that are NOT nested: JQuery/JavaScript div tag “containment” approach/algor

    - by Pete Alvin
    Background: I've created an online circuit design application where .draggable() div tags are containers that contain smaller div containers, and so forth. Question: for any particular div tag I need to quickly identify if it contains other div tags (that may in turn contain other div tags). Since the div tags are draggable, they are NOT nested inside each other in the DOM, but I think they are absolutely positioned. So I think that a "hit testing" approach is the only way to determine containment, unless there is some "secret" routine built in somewhere that could help with this. I've searched jQuery and I don't see any built-in routine for this. Does anyone know of an algorithm that's quicker than O(n^2)? It seems like I have to walk the list of div tags in an outer loop (n) and have an inner loop (another n) to compare against all other div tags and do a "containment test" (position, width, height), building a list of contained div tags. That's n squared. Then I have to build a list of all nested div tags by concatenating contained lists. So the total would be O(n^2) + n. There must be a better way?
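
    To make the brute-force baseline concrete, here is a sketch of the containment test and the parent assignment it implies, written in C# purely to illustrate the geometry (the Box type and all names are mine, not the application's code). Sorting by area lets the inner loop stop at the first, smallest container it finds, although the worst case is still quadratic.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical stand-in for a draggable div's absolute geometry.
        record Box(string Id, double Left, double Top, double Width, double Height)
        {
            public double Area => Width * Height;

            // The "containment test" from the question: position, width and height.
            // A small epsilon may be needed if positions are fractional pixels.
            public bool Contains(Box other) =>
                other.Left >= Left &&
                other.Top >= Top &&
                other.Left + other.Width <= Left + Width &&
                other.Top + other.Height <= Top + Height;
        }

        static class ContainmentTree
        {
            // Maps each box to its smallest enclosing box (its direct parent), or null.
            // Because the list is sorted by area (largest first), a box can only be
            // contained by boxes that appear before it, and the first hit found while
            // walking backwards is the smallest such container. Worst case is still
            // O(n^2), but the inner loop usually terminates early.
            public static Dictionary<Box, Box> BuildParents(IEnumerable<Box> boxes)
            {
                var byAreaDesc = boxes.OrderByDescending(b => b.Area).ToList();
                var parents = new Dictionary<Box, Box>();

                for (int i = 0; i < byAreaDesc.Count; i++)
                {
                    Box parent = null;
                    for (int j = i - 1; j >= 0; j--)
                    {
                        if (byAreaDesc[j].Contains(byAreaDesc[i]))
                        {
                            parent = byAreaDesc[j];
                            break;
                        }
                    }
                    parents[byAreaDesc[i]] = parent;
                }
                return parents;
            }
        }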

  • Why does my REST request return garbage data?

    - by Alienfluid
    I am trying to use LWP::Simple to make a GET request to a REST service. Here's the simple code:

        use LWP::Simple;
        $uri = "http://api.stackoverflow.com/0.8/questions/tagged/php";
        $jsonresponse = get $uri;
        print $jsonresponse;

    On my local machine, running Ubuntu 10.4 and Perl version 5.10.1:

        farhan@farhan-lnx:~$ perl --version
        This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi

    I can get the correct response and have it printed on the screen. E.g.:

        farhan@farhan-lnx:~$ head -10 output.txt
        {
          "total": 1000,
          "page": 1,
          "pagesize": 30,
          "questions": [
            {
              "tags": [
                "php",
                "arrays",
                "coding-style"
        (... snipped ...)

    But on my host's machine, which I SSH into, I get garbage printed on the screen for the same exact code. I am assuming it has something to do with the encoding, but the REST service does not return the character set type in the response, so how do I force LWP::Simple to use the correct encoding? Any ideas what may be going on here? Here's the version of Perl on my host's machine:

        [dredd]$ perl --version
        This is perl, v5.8.8 built for x86_64-linux-gnu-thread-multi
