Search Results

Search found 44645 results on 1786 pages for 'manage add ons'.

  • Manually manipulating ArrayList

    - by jsan
    I have an assignment where I have to create a deque; however, I am not allowed to use any built-in classes or interfaces. I am implementing my deque using an array list. My problem is that when I have to, for instance, add to the beginning of the array list (the beginning of the queue), I am not allowed to do this:

        public void addFirst(ArrayList<Integer> array) { array.add(0, int); }

    Is there a way to do this without using the add() function? Such as manually adding to the front and shifting the rest of the array to the right? Or maybe creating a new array list and copying...I'm not sure. Any help would be great; I have a bunch of functions to write, and getting the first one done will definitely put me in the right direction. Thanks
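
    (For reference, the manual shift the asker describes is straightforward; here is a minimal Java sketch. The class and field names are illustrative, not part of the assignment.)

        // Insert a value at the front of a backing array by shifting the
        // existing elements one slot to the right, without using List.add().
        public class IntDeque {
            private int[] data = new int[8]; // backing storage
            private int size = 0;            // number of slots in use

            public void addFirst(int value) {
                if (size == data.length) {            // grow when full
                    int[] bigger = new int[data.length * 2];
                    System.arraycopy(data, 0, bigger, 0, size);
                    data = bigger;
                }
                for (int i = size; i > 0; i--) {      // shift everything right
                    data[i] = data[i - 1];
                }
                data[0] = value;
                size++;
            }
        }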


  • accessing ok.DialogResult inside a function

    - by sayyad
    I am writing a function which accepts user input from a textbox. The action should take place when the OK button is clicked. I made the button with DialogResult = OK. The code looks like:

        private void canny()
        {
            // if (this.DialogResult == DialogResult.OK) { MessageBox.Show("ok"); }
            // TODO
        }

    But I can't see any message box. What am I missing?

        private void ok_Click(object sender, EventArgs e)
        {
            // should I add something here?
        }

    Should I add the control with Form1.Controls.Add(ok)? If yes, in which part of the code? Regards,


  • SQL Check Constraint cannot reference other column

    - by user1777711
    I am trying to add this SQL check:

        ALTER TABLE School add Role check_role CHECK (check_role IN ('Teaching Assistant', 'Lecturer', 'Professor'));

    I get the error below:

        ERROR at line 3:
        ORA-02438: Column check constraint cannot reference other columns

        SQL> desc School;
        Name                                      Null?    Type
        ----------------------------------------- -------- ----------------------------
        STAFFNUM                                  NOT NULL VARCHAR2(12)
        NAME                                      NOT NULL VARCHAR2(50)
        ADDRESS                                   NOT NULL VARCHAR2(300)
        DOB                                                DATE

    I am trying to add a column called Role, with the check constraint check_role. I am using Oracle SQL. Thanks for all help!


  • Android move data between activities

    - by Parhs
    For example, I have 4 activities: A, B, C, D. A is a list of objects; B, C, and D are used to add a new object. So A calls B, then C, then D. However, C and D have a button to 'add the object'. When going up from B, C, or D to activity A, A should have a List with these objects. Even if the user taps back, the items should be 'saved'. However, I don't want to start a new Intent at C or D, because I don't want the user to go back to A and then click 'add new item' again. I have been advised to keep this data on the application object (see the sketch below). Is there any other suggestion?
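
    (The application-object suggestion the asker mentions can be sketched like this in Java. The class name MyApp and the String payload are illustrative, and the subclass must be registered via the android:name attribute on the <application> element in the manifest.)

        import android.app.Application;
        import java.util.ArrayList;
        import java.util.List;

        // Shared state that outlives any single activity: B, C and D append
        // to this list, and A reads it when it resumes, regardless of how
        // the user navigated back.
        public class MyApp extends Application {
            private final List<String> pendingItems = new ArrayList<String>();

            public List<String> getPendingItems() {
                return pendingItems;
            }
        }

        // Inside any activity:
        //     MyApp app = (MyApp) getApplication();
        //     app.getPendingItems().add("new object");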


  • DropDownList and ViewData

    - by Petko Xyz
    In my controller I have:

        Category x = new Category(1, "prva", 0);
        Category y = new Category(2, "vtora", 1);
        Category z = new Category(3, "treta", 1);
        List<Category> categories = new List<Category>();
        categories.Add(x);
        categories.Add(y);
        categories.Add(z);
        ViewData["categories"] = categories;

    And in my view I have:

        <%= Html.DropDownList("categories")%>

    But I get an error:

        The ViewData item that has the key 'categories' is of type 'System.Collections.Generic.List`1[[Category, MvcApplication1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]' but must be of type 'IEnumerable<SelectListItem>'.

    Ugh, how do I resolve this?


  • Nested BooleanQuery?

    - by KailZhang
    I'm using a BooleanQuery to combine several queries. I find that if I add a BooleanQuery to the BooleanQuery, then no result is returned. The added BooleanQuery is a MUST_NOT one, like -city_id:100. But as Lucene's documentation says, a BooleanQuery can be nested, which I think means it's okay to add such a BooleanQuery. For now I have to get all the clauses from the BooleanQuery to be added, and then add them to the container BooleanQuery one by one. I'm a bit confused. Can anybody help? Thank you very much!
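
    (A likely explanation, for reference: in Lucene, a BooleanQuery containing only MUST_NOT clauses matches nothing by itself, so nesting it as a clause yields no results. Pairing the negation with a MatchAllDocsQuery gives it something to subtract from. Below is a sketch against the classic, pre-5.x Java API; the "type" clause is illustrative, not from the question.)

        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.MatchAllDocsQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        public class NestedNegation {
            // Builds "everything except city_id:100", which can then be
            // nested safely inside a container BooleanQuery.
            static Query buildExclusion() {
                BooleanQuery exclusion = new BooleanQuery();
                exclusion.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);
                exclusion.add(new TermQuery(new Term("city_id", "100")),
                              BooleanClause.Occur.MUST_NOT);
                return exclusion;
            }

            static Query buildContainer() {
                BooleanQuery container = new BooleanQuery();
                container.add(new TermQuery(new Term("type", "hotel")), // illustrative clause
                              BooleanClause.Occur.MUST);
                container.add(buildExclusion(), BooleanClause.Occur.MUST);
                return container;
            }
        }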


  • Obtaining Index value of dictionary

    - by Maudise
    I have a piece of code which contains the following:

        Public Test As Dictionary(Of String, String())

    which is populated like this:

        tester = New Dictionary(Of String, String())
        tester.Add("Key_EN", {"Option 1_EN", "Option 2_EN", "Option 3_EN"})
        tester.Add("Key_FR", {"Option 1_FR", "Option 2_FR", "Option 3_FR"})
        tester.Add("Key_DE", {"Option 1_DE", "Option 2_DE", "Option 3_DE"})

    There's then a combo box which uses the following:

        Dim Language As String
        Language = "_EN" ' note: this is set by a drop-down combo box to select _EN or _FR etc.
        cboTestBox.Items.AddRange(tester("Key" & Language))

    What I need to be able to do is see which index position the answer is in and convert it back to the Key_EN entry. So, for example, if _DE is selected, then the options "Option 1_DE", "Option 2_DE", "Option 3_DE" are displayed. If the user chooses Option 3_DE, I need to be able to convert this to Option 3_EN. Many thanks, Maudise
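
    (The index-based mapping the asker wants, sketched in Java for illustration: find the chosen option's position in the selected language's array, then read the same position from the English array. Class and variable names are illustrative.)

        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.Map;

        public class OptionMapper {
            public static void main(String[] args) {
                Map<String, String[]> tester = new HashMap<String, String[]>();
                tester.put("Key_EN", new String[] {"Option 1_EN", "Option 2_EN", "Option 3_EN"});
                tester.put("Key_DE", new String[] {"Option 1_DE", "Option 2_DE", "Option 3_DE"});

                String chosen = "Option 3_DE"; // value picked in the combo box
                // Position of the chosen option in the selected language's array...
                int index = Arrays.asList(tester.get("Key_DE")).indexOf(chosen);
                // ...is the same position in the English array.
                String english = tester.get("Key_EN")[index];
                System.out.println(english); // prints "Option 3_EN"
            }
        }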


  • Windows Phone 7 Control Caching - 'Element is already the child of another element'

    - by will
    I'm trying to speed up my Windows Phone 7 page load times. I have a 'static' page whose content is dynamically created in a Panorama control - static meaning that the content never changes. On the first load I look at my config file, create the individual PanoramaItem controls, and add them to the main Panorama control. I'm trying to keep a List of PanoramaItems in a static place so that the initial creation only happens once and I can just add a fully rendered version to my Panorama control when the page is rendered. It works fine on first load, but when I try to add the cached PanoramaItems to the Panorama control I get the message "Element is already the child of another element". This makes sense, since I already added them before. But I can't see a way to disconnect the PanoramaItems from the first Panorama control... I could be going about the control caching thing all wrong as well... Let me know if there's another way to do this.


  • UIButton At Run Time

    - by rami
    I want to create a UIButton at run time. I have a main view containing a table view; when I select a row, it opens a sub view with a toolbar. When I click the Add button on the toolbar, it opens another view in which I enter my value (like a name). When I press the Save button, it goes back to the sub view with the toolbar and creates a button, labelled with the name I saved, in the sub view. The problem is that when I do this again everything seems to work, but the button created previously is no longer visible in the sub view. It displays only the button with the name I saved last.


  • Constructors and method chaining in JavaScript

    - by Sethen Maleno
    I am trying to make method chaining work in conjunction with my constructors, but I am not exactly sure how to go about it. Here is my code thus far:

        function Points(one, two, three) {
            this.one = one;
            this.two = two;
            this.three = three;
        }

        Points.prototype = {
            add: function() {
                return this.result = this.one + this.two + this.three;
            },
            multiply: function() {
                return this.result * 30;
            }
        }

        var some = new Points(1, 1, 1);
        console.log(some.add().multiply());

    I am trying to call the multiply method on the return value of the add method. I know there is something obvious that I am not doing, but I am just not sure what it is. Any thoughts?
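
    (For reference, the chain breaks because add() returns a number, and a number has no multiply() method; chaining requires each intermediate call to return the object itself. The same pattern, sketched in Java for illustration:)

        // Each intermediate method returns this, so calls can be chained;
        // the final call in the chain returns the computed value.
        public class Points {
            private final int one, two, three;
            private int result;

            public Points(int one, int two, int three) {
                this.one = one;
                this.two = two;
                this.three = three;
            }

            public Points add() {
                result = one + two + three;
                return this; // enables points.add().multiply()
            }

            public int multiply() {
                return result * 30;
            }
        }

        // Usage: new Points(1, 1, 1).add().multiply() evaluates to 90.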


  • Why can't I set a layout for a page in a QStackedWidget in designer?

    - by cppguy
    Steps to reproduce in Designer for Qt 4.8.0:

        1. I create a new dialog form in Qt Designer.
        2. I add a QStackedWidget to the dialog.
        3. I set the layout of the dialog to make the stacked widget resize with the dialog.
        4. I add a few controls to page one in the stacked widget.
        5. I select the first page in the stacked widget in the right-hand tree view of controls.
        6. The icon next to that page (which is a QWidget) shows the page is missing a layout.
        7. When I click one of the layout buttons above, it doesn't change the layout of the page QWidget; it changes the layout of the dialog, even though I had explicitly selected the page.

    Is this a bug in Designer? Am I missing something? I really don't want to add the layouts programmatically, as that prevents me from being able to lay out the pages in Designer in the same .ui file.


  • My BlackBerry app will not display on the simulator even though it tells me I have no errors

    - by user1334120
    I am trying to run this code on my BlackBerry simulator, but it will not appear on the main menu. Can anybody please help me out?

        import net.rim.device.api.ui.FieldChangeListener;
        import net.rim.device.api.ui.Field;
        import net.rim.device.api.ui.UiApplication;
        import net.rim.device.api.ui.component.ButtonField;
        import net.rim.device.api.ui.component.LabelField;
        import net.rim.device.api.ui.container.MainScreen;

        public class FirstScreen extends MainScreen implements FieldChangeListener {
            ButtonField theButton;

            public FirstScreen() {
                add(new LabelField("First Screen"));
                theButton = new ButtonField("New Screen", ButtonField.CONSUME_CLICK);
                theButton.setChangeListener(this);
                add(theButton);
            }

            public void fieldChanged(Field field, int context) {
                if (field == theButton) {
                    UiApplication.getUiApplication().pushScreen(new SecondScreen());
                }
            }

            public class SecondScreen extends MainScreen {
                public SecondScreen() {
                    add(new LabelField("Second Screen"));
                }
            }
        }
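
    (A common cause of this symptom, for reference: the project has no application entry point; FirstScreen is a screen, not an application. A minimal sketch of the missing piece, using the illustrative class name MyApp:)

        import net.rim.device.api.ui.UiApplication;

        // A BlackBerry app needs a UiApplication subclass whose main() pushes
        // the first screen and then enters the event dispatcher; without it,
        // FirstScreen is never shown.
        public class MyApp extends UiApplication {
            public static void main(String[] args) {
                MyApp app = new MyApp();
                app.enterEventDispatcher(); // runs the UI event loop; never returns
            }

            public MyApp() {
                pushScreen(new FirstScreen()); // FirstScreen from the question
            }
        }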


  • How to set ID of item when the item is not yet created..?

    - by hoppl
    I'll try to make myself clear with an example: imagine an admin page for adding a product to a database (product list). When I open the ADD ITEM page, the item is not yet created (not until I click the submit button), but let's say I want to add the categories this product will appear in (with AJAX, for example). When I run the AJAX script, I need to tell it which ID (my product's) to put these categories under... How should I do this? Is inserting a blank item into the database (to get mysql_insert_id) when the page opens a good way to do this? Is it prone to conflicts or errors..? How do you guys do it?
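
    (One common alternative to inserting a placeholder row, sketched in Java for illustration: generate the product's ID up front, e.g. a UUID, so the AJAX calls can reference it before the final INSERT. The names here are illustrative, not from the question.)

        import java.util.UUID;

        public class DraftId {
            public static void main(String[] args) {
                // Generated when the ADD ITEM page is opened and embedded in a
                // hidden form field; the category AJAX calls and the final save
                // all reference the same ID, so no blank row is needed.
                String productId = UUID.randomUUID().toString();
                System.out.println(productId);
            }
        }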


  • can't figure out pointer assignment in C

    - by vadiklk
        int add(char *var1, char *var2, char **var3)
        {
            int num1, num2,
                length1 = strlen(var1),
                length2 = strlen(var2),
                length = max(length1, length2) + 1;
            char *result = (char*) calloc(length, sizeof(char));
            ...
            free(*var3);
            *var3 = result;
            return 0;
        }

    Outside of the function, var3 is still nothing. More detail:

        int addSubCommand(char **vars, int isAdd)
        {
            ...
            return add(vars[index1], var2, &(vars[index3]));
        }

    That's where I call add. The char** vars is passed from one function to the other.


  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors you should be considering when determining which practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go:

    If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, an Excel spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc., is complete overkill if a one-time-use Access database could solve the problem perfectly well.

    A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years; then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure, you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times).

    My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity - How complex is the application functionality?
    3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time-use application, does it fill a short-term need, or is it more strategic and expected to be in use for many years to come?

    Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using. Rather, my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project.

    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row.

    Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS, we can see why:

    Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).

    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.

    Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer's skills from junior->intermediate->senior->…, they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here).

    A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be the ability to make the conscious decision to move a project's code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch.

    An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.


  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up

    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing as your web browser is forced to redownload the files all over again, and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs

    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother.

    Open the Disk Defragmenter and Manually Defragment

    On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to do this — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all.

    Disable Your Pagefile to Increase Performance

    When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slow, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig

    Some websites claim that Windows may not be using all of your CPU cores or that you can speed up your boot time by increasing the amount of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the amount of cores used. In reality, Windows always uses the maximum amount of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to only use a single core on a multi-core system — but all it can do is restrict the amount of cores used.

    Clean Your Prefetch To Increase Startup Speed

    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, Windows will have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS To Increase Network Bandwidth

    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster

    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and believe that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory

    Windows does things, such as creating scheduled system restore points, when you step away from your computer. It waits until your computer is “idle” so it won’t slow your computer and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark.

    Delay or Disable Windows Services

    There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista and computers have more than enough memory, so you won’t see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another service.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run on boot, increasing your boot time and consuming memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.


  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer. Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAME and A-Records (the latter enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility. All of these improvements are now live in production and available to start using immediately.

    New “Shared” Scaling Tier

    Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month). All of the capabilities we introduced in June with this free tier remain the same with today’s update.

    Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June). Scaling to either of these modes is easy. Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button. Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed. Below are some more details on the new “shared” option, as well as the existing “reserved” option:

    Shared Mode

    With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites. A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment. Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve. The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB.

    A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it. The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com). We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers).

    You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled). A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month).

    Reserved Instance Mode

    In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode. When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it). You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits. You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc). Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save. Changes take effect in seconds.

    Unlike shared mode, there is no per-site cost when running in reserved mode. Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost). Reserved instance VMs start at 8 cents/hr for a small reserved VM.

    Elastic Scale-up/down

    Windows Azure Web Sites allows you to scale up or down your capacity within seconds. This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application.

    If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings. You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage).

    Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour. So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode. There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate. This makes it super flexible and cost effective.

    Improved Custom Domain Support

    Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com). You can associate multiple custom domains to each Windows Azure Web Site. With today’s release we are introducing support for A-Records (a big ask by many users).

    With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix). Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems).

    We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release. Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them. As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime.

    Continuous Deployment Support with Git and CodePlex or GitHub

    One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git. This provides a really powerful way to manage your application deployments using source control. It is really easy to enable this from a website’s dashboard page. The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then if they are successful automatically publish to Azure.

    With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub. This support is enabled with all web-sites (including those using the “free” scaling mode). Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site. You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walk through a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub. Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs. This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.

    The two videos below walk through how easy it is to enable this workflow and deploy both an initial app and then make a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes); Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes). This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git. Note: today’s release supports establishing connections with public GitHub/CodePlex repositories. Support for private repositories will be enabled in a few weeks.

    Support for multiple branches

    Previously, we only supported deploying from the git ‘master’ branch. Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub. This enables a variety of useful scenarios. For example, you can now have two web-sites - a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub. You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch. This enables a really clean way to enable final testing of your site before it goes live. A 1 minute video demonstrates how to configure which branch to use with a web-site.

    Summary

    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • CodePlex Daily Summary for Wednesday, June 05, 2013

    Popular Releases

    QlikView Extension - Animated Scatter Chart: Animated Scatter Chart - v1.0: Version 1.0, including source code, qar file and an example QlikView application. Tested with browsers Firefox 20 (x64), Google Chrome 27 (x64), Internet Explorer 9, and with QlikView Desktop 11 - SR2 (x64), QlikView Desktop 11.2 - SR1 (x64), QlikView Ajax Client 11.2 - SR2 (based on x64)
    BarbaTunnel: BarbaTunnel 7.2: Warning: HTTP Tunnel is not compatible with version 6.x and prior; the HTTP packet format has been changed. Check Version History for more information about this release.
    Web Pages CMS: 0.5: First public release
    Harvester - Debug Viewer for Trace, NLog & Log4Net: v2.0.1 (.NET 4.0): Minor updates: fixed incorrect process naming being displayed if a process ID is reassigned before the cache is invalidated; fixed incorrect event type/source for TraceListener.TraceData methods; updated NLog package references. Official documentation moved to GitHub: http://cbaxter.github.com/Harvester Official source moved to GitHub: https://github.com/cbaxter/Harvester
    SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.8: This release includes these changes: upgraded SuperSocket to 1.5.3, which is much more stable; added handshake request validating API (WebSocketServer.ValidateHandshake(TWebSocketSession session, string origin)); fixed a bug where m_Filters in SubCommandBase can be null if the command's method LoadSubCommandFilters(IEnumerable<SubCommandFilterAttribute> globalFilters) is not invoked; fixed the compatibility issue with Origin handling in the different protocol versions; Marked ISub...
    Impulse Media Player: Impulse Media Player 3.3.1.0: EchoNest Analyzer introduced; similar track feature (social tab); last played tracks can be removed permanently; pitch / tempo bar can be hidden
    BlackJumboDog: Ver5.9.0: 2013.06.04 Ver5.9.0 (release notes not recoverable from this snapshot)
    Media Companion: Media Companion MC3.569b: New: Movies - Autoscrape/Batch Rescrape extra fanart and/or extra thumbs; Movies - Alternative editor can manually add actors; TV - Batch Rescraper, AutoScrape extrafanart, if option enabled. Fixed: Movies - Slow performance switching to movie tab, by adding option 'Disable "Not Matching Rename Pattern"' to Movie Preferences - General; Movies - Fixed only actors with images were scraped and added to nfo; Movies - Fixed filter reset if selected tab was above Home Movies. Updated Medi...
    Nearforums - ASP.NET MVC forum engine: Nearforums v9.0: Version 9.0 of Nearforums with great new features for users and developers: SQL Azure support; Admin UI for Forum Categories; avoid HTML validation for certain roles; improved profile picture moderation and support; warn, suspend, and ban users; web administration of site settings; extensions support. Visit the Roadmap for more details. Webdeploy package sha1 checksum: 9.0.0.0: e687ee0438cd2b1df1d3e95ecb9d66e7c538293b
    eReading: eReading (release notes not recoverable from this snapshot)
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.93: Added -esc:BOOL switch (CodeSettings.AlwaysEscapeNonAscii property) to always force non-ASCII characters (ch > 0x7f) to be escaped as the JavaScript \uXXXX sequence. This switch should be used if creating a Symbol Map and outputting the result to a text encoding other than UTF-8 or UTF-16 (ASCII, for instance). Fixed a bug where a complex comma operation is the operand of a return statement, and it was looking at the wrong variable for possible optimization of = to just .
    VG-Ripper & PG-Ripper: VG-Ripper 2.9.42: Changes: NEW: Added support for "GatASexyCity.com" links; NEW: Added support for "ImgCloud.co" links; NEW: Added support for "ImGirl.info" links; NEW: Added support for "SexyImg.com" links; FIXED: "ImageBam.com" links
    Document.Editor: 2013.22: What's new for Document.Editor 2013.22: improved Bullet List support; improved Number List support; minor bug fixes, improvements and speed-ups
    CarrotCake, an ASP.Net WebForms CMS: Binaries and PDFs - Zip Archive (v. 4.3 20130528): Features include a content management system and a robust featured blogging engine. This includes configurable date based blog post URLs, blog post content association with categories and tags, assignment/customization of category and tag URL patterns, simple blog post feedback collection and review, blog post pagination/indexes, designation of default blog page (required to make search, category links, or tag links function), URL date formatting patterns, RSS feed support for posts and pages...
    PHPExcel: PHPExcel 1.7.9: See Change Log for details of the new features and bugfixes included in this release, and methods that are now deprecated.
    Droid Explorer: Droid Explorer 0.8.8.10 Beta: Fixed issue with some people having a folder called "android-4.2.2" in their build-tools path. - 16223
    patterns & practices: Data Access Guidance: Data Access Guidance Drop3 2013.05.31: Drop 3
    DotNet.Highcharts: DotNet.Highcharts 2.0 with Examples: DotNet.Highcharts 2.0, tested and adapted to the latest version of Highcharts 3.0.1. Added new chart types: Arearange, Areasplinerange, Columnrange, Gauge, Boxplot, Waterfall, Funnel and Bubble. Added new type PercentageOrPixel, which represents a value as a number or a number with percentage; used for sizes, width, height, length, etc. Removed inheritances in YAxis option classes. Closed issues: 682: Missing property - XAxisPlotLinesLabel.Text; 688: backgroundColor and plotBackgroundColor are...
    Umbraco CMS: Umbraco 6.1.1: Source code: Looking for the source code? We're not uploading that as a zip file any more because you can already get it from CodePlex; click this link and hit the "Download" link. Blog: Read the release blog post for 6.1.0. Read the release blog post for 6.1.1. Getting Started: Read the installation documentation: http://our.umbraco.org/documentation/Installation/ Check the free foundation videos on how to get started building Umbraco sites. They're available from: Introduction for webmasters:...
    Composite C1 CMS - Open Source on .NET: Composite C1 4.0 (release candidate): Composite C1 4.0 (4.0.4897.31550) (release candidate). Write a review for this release. Getting started: if you are new to Composite C1 and want to install it: http://docs.composite.net/Getting-started What's new in Composite C1 4.0: the following are highlights of major changes since Composite C1 3.2. General user features: uploads up to 512MB accepted in the media archive; new “Block Selector” in Visual Editor – enables users to create styled div, blockquote etc. elements (not yet availabl...

    New Projects

    ApiDoc: ApiDoc is a library for creating your own API documentation similar to the MSDN directly from your assembly and /// XML comments, without source code.
    Associativy Internal Link Graph Builder: Orchard module for automatically creating Associativy graphs (http://associativy.com/) from internal links.
    Azure Business App Scale Proof of Concepts: This is a series of proof-of-concept demo applications built to demonstrate the scale of particular architecture or application designs.
    Badr: .Net Web Framework: Simple, database-driven, multiplatform .Net web framework
    Calcolo di Integrali con approssimazioni: [Italian: approximate computation of integrals via the trapezoid, rectangle, parabola (Simpson) and Monte Carlo methods]
    configuration: a full-function configuration system based on .NET
    Cron Expression Descriptor: "Translates" a cron expression into a human-readable format. Supports databinding, and creation of the expression and Quartz.NET job schedules
    Daphne Web Edition: The Daphne Web Edition of software for professional checkers players, running in a browser.
    Entity framework T4 NHibernate mapping generator: This project contains T4 templates to create POCOs + NHibernate mappings from an Entity Framework Model (.edmx).
    EWS Streaming Notification Sample Application: Sample application showing how to handle multiple subscriptions using streaming notifications, specifically for Exchange 2013 (or Wave 15 of Office 365).
    EXACT_EXTENSION: This is a project to support an Account System in International Schools
    IPS Training: Playing around with Joe to teach some programming.
    jean0604wordpressmercurial: dfdafda
    Lenic.DI: Lenic.DI -- Another IOC Container Library Using Delegate
    Manage Azure Srevice: This project is a Windows Forms application to manage Azure services and deployments.
    Microsoft dot Net Lab: This project concentrates into a Lab a large web project based on Microsoft Best Practices, and info on “what works” and, more, what should be avoided in a production environment.
    Miris Human Milk Analyser: This is a DEMO project ONLY
    Multilingual call each other: Multilingual call each other
    nBlade: A Dependency Injection Container.
    nChart: A JavaScript Chart Library based on D3.js
    nCMS: A Content Management System.
    nReport: A JavaScript Report Library.
    nTemplate: A Template Engine.
    searchLocal: search local
    T4 Unit Test Constructor: T4 Unit Test Constructor is a Text Transform file that generates a complete Unit Testing project based on sibling projects inside a solution.
    v3r137: m0l3cUL4r dyN4MiC2 5iMul47i0N u5IN9 V3Rl37 iN739r47I0N. vi5U4li23 p4R7ICL32 8y 0P3n9l P0iN7 5prI73. U23 0P3NMp 4nd 0p3NCl f0r 5p33D.
    Virtual Sport for Sport Team: This is the website to manage a sports team. Through this website you can manage the members of the team, from players to staff, schedules, and more. and for su
    Windows Azure MultiSite Role: This web role allows you to host multiple websites on the same VM instances, synchronising the files automatically.
    WuvOverlay: An overlay for the game Guild Wars 2 to display useful information obtained from the public API.
    XYZ: XYZ


  • CodePlex Daily Summary for Wednesday, December 29, 2010

    Popular Releases

    DocX: DocX v1.0.0.11: Building the Examples project: to build the Examples project, download DocX.dll and add it as a reference to the project. Overview: this version of DocX contains many bug fixes; it is a serious step towards a stable release. Added: 1) unit testing project, 2) Examples project, 3) too many bug fixes to list here, see the source code change list history.
    Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: debug info is now stored in a single .cpdb file (which is a Firebird database); keyboard input works now (using Console.ReadLine); console colors work (using Console.ForegroundColor and .BackgroundColor)
    AutoLoL: AutoLoL v1.5.0: Added the all-new Masteries Browser, which replaces the Quick Open combobox. AutoLoL will now attempt to create file associations for mastery (*.lolm) files. Each Mastery Build can now contain keywords that the Masteries Browser will use for filtering. Changed the way AutoLoL detects if another instance is already running. Changed the format of the mastery files to allow more information stored in*. Dialogs will now focus the Ok or Cancel button, which allows the user to press Return to clo...
    Confree for Outlook: Confree for Outlook 1.0: Features: create a Claro/Telmex conference directly from Outlook (only works for Argentina). Notes: for now, only works with Claro/Telmex Argentina (if you are using http://simon.telmex.net.ar, we are good).
    Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability: many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading: PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!
    Razor Templating Engine: Razor Templating Engine v1.2: Changes: ADDED: standard namespace imports for all templates: System, System.Collections.Generic, System.Linq (Changeset 5635); ADDED: methods for precompilation (Changeset 3283); CHANGED: refactored precompilation to be exposed per-TemplateService (Changeset 3440); CHANGED: added more descriptive compilation exception message (Changeset 3629); FIXED: forced reference to Microsoft.CSharp to correct support for testing frameworks (Changeset 3689); FIXED: added support for nested anonymous obj...
    PhysicalMeasure C# library: PhysicalMeasure 1.0 Release 2010-12-28: PhysicalMeasure 1.0 Release 2010-12-28
    Facebook C# SDK: 4.1.1: From 4.1.1 release: authentication bug fix caused by a Facebook change (error with redirects in Safari); Authenticator fix, always returning true. From 4.1.0 release: lots of bug fixes; removed Dynamic Runtime Language dependencies from non-dynamic platforms; samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic sample; changed internal serialization to use Json.net. BREAKING CHANGE: Canvas Session is no longer supported. Use Signed...
    MonitorWang: MonitorWang v1.0.5 (Growler): What's new? Added Growl Notification Finalisers - these are interceptor components that work exclusively with the Growl Publisher. These allow you to modify the Growl Notification just prior to it being sent by the publisher. You can inject custom logic to precisely control how the Growl Notification will appear; this includes changing the Growl Priority level and message text. I've created two Growl Notification Finalisers - one allows you to change the Growl Notification Priority based on ...
    Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!
    EnhSim: EnhSim 2.2.7 ALPHA: 2.2.7 ALPHA. This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Mongoose has bee...
    LINQ to Twitter: LINQ to Twitter Beta v2.0.19: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.
    Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: Architecture is reviewed and adjusted in a way so that I can introduce the Web version and WPF version of this framework next. - Rocket.Core is introduced - Controller button functions revisited and updated - DB is renewed to suit the implemented features - Create New button functionality is changed - Add Question Handling features
    Flickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when the network connection is flaky, so I discovered them all while at my parents' house for Christmas).
    NoSimplerAccounting: NoSimplerAccounting 6.0: Fixed a bug in the expense category report.
    NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (thanks to Angelo)
    NewLife XCode: XCode v6.5.2010.1223 (release notes not recoverable from this snapshot)
    VivoSocial: VivoSocial 7.4.0: Please see changes: http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48
    Umbraco CMS: Umbraco 4.6 Beta - codename JUNO: The Umbraco 4.6 beta (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and more than 89 bug fixes since the 4.5.2 release. Improved installer experience; updated Starter Kits (Simple, Blog, Personal, Business); beautiful, free, customizable skins included; skinning engine and skin customization (see Skinning Documentation Kit); default dashboards on install with hide option; updated Login t...
    ASP.NET MVC SiteMap provider: MvcSiteMapProvider 2.3.0: Using NuGet? MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation! Donate via PayPal. Release notes: this will be the last release targeting ASP.NET MVC 2 and .NET 3.5. MvcSiteMapProvider 3.0.0 will be targeting ASP.NET MVC 3 and .NET 4. The Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan. ISiteMapNodeUrlResolver is now completely responsible for generating th...

    New Projects

    42 Sight Holistic Data Warehousing on Microsoft SQL Server 2008: This project is the Holistic DW Template. Included are the 3 MS Excel spreadsheet reporting models as described in the book Holistic Data Warehousing. This template is multi-purpose and requires no modification other than you populating it according to the book examples
    Andshev.OrmTools: Tools for generating SQL queries using LINQ. An ORM framework is also planned to be implemented.
    BFG and Dice Roller: This solution will contain 2 concepts: the Dice Roller, with hooks for extension into other applications, as well as a Personal Battlefleet Gothic Helper application to help manage, control, and smooth the flow of BFG games. The BFG application will target the Phone 7 platform.
    BlondieODOS: its an os
    Bluehat Community projetcts: Bluehat Community projetcts.
    C# - DBWrapper MySQL: [Portuguese: opening a discussion here to create a class for manipulating data in MySQL.]
    CardGame for Education: This project's goal is to create a small card game and to teach the team members how to design, implement and use "real world" tools.
    Englishlearning: My test project!
    Faran Silver: Silverlight controls developed so far. Includes: 1- Persian Date Input, 2- Close-able Tab Item, 3- Shamsi Date Converter
    gocodelibrary: This is my project.
    Id3Stuff: Project that throws together some nice id3 mass editing functions
    InfoStrat.OakFx: OakFx by InfoStrat
    inivi: inivi
    JJODESK: This little utility, written in VB.net 2.0, allows you to: switch the IE proxy, manage IPs, launch websites, launch other utility tools, and launch and manage cmd, bat and vbs scripts, and much more. I first wrote this tool for my personal use, because I work with several networks and customer sites; I think many other software developers have the same need. All parameters are customisable and are stored in an ini text file, and it can run from a USB stick.
    learn MVC: Just keeping all source code in one place. That's all, folks
    LJF.Utility: Utility library (description not recoverable from this snapshot)
    Lucidity: Lucid WILD
    MixModes Synergy - The WPF Toolkit: MixModes Synergy 2010 is a complete toolset that simplifies user interface development in Windows Presentation Foundation based applications. Synergy includes rich user interface controls providing professional look and feel for your applications out of the box.
    Multi Targetted Rss Reader: A multi-targeted RSS reader demo that shows how to multi-target your view model across different screens (WP7, Silverlight, WPF, Surface).
    Secure Batch Command Tool For Administrators: A command line tool that helps you protect the passwords that you have to provide in your batch files. This tool encrypts DOS commands and updates the batch files in a way that keeps your batch file maintainable and easy to use.
    SharePoint 2010 Conditional Lookup Control: I created a "Conditional Lookup" control for SharePoint 2010. With this you can connect two lookup field controls on list forms to fill the second control with values depending on the selected value in the first control. There is a demo inside.
    sql dot: Ease when making a connection to the SQL server
    wolisbetter: WOL is better!
    WP7PasswordsBackup: backup/sync client for the Windows Phone 7 passwords app.

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
While it's no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and at identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of those programmes will clearly continue for some time to come. What is becoming clear is that these programmes provide a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence: analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections.

Although Governance, Risk and Compliance (GRC) professionals are not looking at implementing such programmes, there are many similar GRC "big data" challenges to be faced, and potential lessons to be learned from these high-profile government programmes, that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity; early or retrospective violation of regulations, laws or corporate policies and procedures; emerging risks; weakening controls; and so on? Not exactly the stuff of James Bond, to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges.

So what is big data, and how can it benefit the GRC process? Although definitions vary, big data largely refers to the following types of data:

- Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
- Machine-Generated/Sensor Data – includes Call Detail Records ("CDR"), weblogs and trading systems data.
- Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it's often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:

- Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by the IT systems that power the enterprise, including live data from packaged and custom applications – for example, app servers, web servers, databases, networks, virtual machines, telecom equipment, and much more.
- Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data means large volumes (over 8 TB per day) need to be managed.
- Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, new products, or new business lines are added, or new marketing campaigns executed, new data types are needed to capture the resultant information.
- Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action.

For example, customer service calls and emails have millions of useful data points and have long been a source of information for GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures, and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value.

Big Challenges - Big Opportunities

As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in big data: "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier – with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired – a big data investment in GRC clearly falls into this category.

However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data, and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly gives the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, given the sheer amount of information organizations deal with, the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint. However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

The Right GRC & Big Data Partnership Becomes Key

The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and deliver the desired value, partnerships become key: partnerships with companies that have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation. Clearly, solutions can be built in-house more cheaply than bought from a vendor, but as has been proven time and time again with self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out, and that should be explored, are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing big data technologies to generate new GRC insights right across the enterprise. Ultimately, big data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones well positioned to make the most of it.

A Blueprint and Roadmap Service for Big Data

Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish the practical, effective governance that will maintain a well-managed environment going forward. Key activities: while your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

- Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
- Conduct a big data readiness and Information Architecture maturity assessment
- Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
- Provide initial guidance on big data candidate selection for migrations or implementation
- Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
- Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
- Conduct an executive workshop with recommendations and next steps

There is little debate that managing risk and managing data are the two biggest obstacles encountered by financial institutions. Big data is here to stay, and risk management certainly is not going anywhere; ultimately, financial services organizations that embrace big data's potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it.

Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • Reference Data Management

    - by rahulkamath
Reference Data Management

Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

What is reference data?

Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders, and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets, including master data, and give them contextual value. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards. The following is an illustrative list of examples of reference data by type:

- Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
- Taxonomies: Industry Classification Categories and Codes, e.g., the North America Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
- Relationships / Mappings: Product/Segment; Product/Geo; City -> State -> Postal Codes; Customer/Market Segment; Business Unit/Channel; Country Codes/Currency Codes/Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 -> ICD-10 mappings
- Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time and Time Zones (e.g., ISO 8601); Tax Rates

Why manage reference data?

Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

Sample Use Cases of Reference Data Management

Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings, either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.

Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card), ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

Specialty Finance: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws, governed at the state and local level, that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes, to comply with the changing regulations.

Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and to assess which applications were impacted by a given change.
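To make the cross-walk idea concrete, here is a minimal C# sketch of an ICD-9 to ICD-10 crosswalk lookup with a human-review flag. This is only an illustration, not DRM functionality: the CrosswalkEntry type and the sample codes are invented for the example (real mappings would come from the published GEMs files).

using System;
using System.Collections.Generic;

// Hypothetical crosswalk entry: one ICD-9 code can map to several
// ICD-10 candidates, so a reviewer must confirm the final choice.
class CrosswalkEntry
{
  public string Icd10Code;
  public bool RequiresReview;   // flags mappings needing human judgment
}

class IcdCrosswalk
{
  private readonly Dictionary<string, List<CrosswalkEntry>> _forward =
    new Dictionary<string, List<CrosswalkEntry>>();

  public void AddMapping(string icd9, string icd10, bool review)
  {
    if (!_forward.ContainsKey(icd9))
      _forward[icd9] = new List<CrosswalkEntry>();
    _forward[icd9].Add(new CrosswalkEntry { Icd10Code = icd10, RequiresReview = review });
  }

  // Forward walk: ICD-9 -> candidate ICD-10 codes.
  public IList<CrosswalkEntry> Lookup(string icd9)
  {
    List<CrosswalkEntry> hits;
    return _forward.TryGetValue(icd9, out hits) ? hits : new List<CrosswalkEntry>();
  }
}

class Program
{
  static void Main()
  {
    var xwalk = new IcdCrosswalk();
    // Illustrative codes only.
    xwalk.AddMapping("250.00", "E11.9", review: false);
    xwalk.AddMapping("250.00", "E11.8", review: true);

    foreach (var hit in xwalk.Lookup("250.00"))
      Console.WriteLine("{0} (review: {1})", hit.Icd10Code, hit.RequiresReview);
  }
}

A backward walk (ICD-10 to ICD-9) would be a second dictionary built the same way, which is exactly why a managed change process matters: both directions must be kept in sync as codes are revised.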

    Read the article

  • Sent Item code in Java

    - by Farhan Khan
I need urgent help; if anyone can resolve my issue it will be highly appreciated. I have created an SMS composer in Java ME (NetBeans) for Urdu-language messages. The problem is that it is not saving sent SMS to Sent Items. I have tried my best to write the code but failed. Tomorrow is my last day to present the code at university, so please help. Below is the code I have made so far:

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package newSms;

import javax.microedition.io.Connector;
import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;
import javax.wireless.messaging.MessageConnection;
import javax.wireless.messaging.TextMessage;
import org.netbeans.microedition.util.SimpleCancellableTask;

/**
 * @author AHTISHAM
 */
public class composeurdu extends MIDlet implements CommandListener, ItemCommandListener, ItemStateListener {

    private boolean midletPaused = false;
    private boolean isUrdu;
    String numb = " ";
    Alert alert;

    // UI fields (originally generated by the NetBeans visual designer)
    private Form form;
    private TextField number;
    private TextField textUrdu;
    private StringItem stringItem;
    private StringItem send;
    private Command exit;
    private Command sendMesg;
    private Command add;
    private Command urdu;
    private Command select;
    private SimpleCancellableTask task;

    MessageConnection clientConn;
    private Display display;

    public composeurdu() {
        display = Display.getDisplay(this);
    }

    // Validates the user input and shows an alert with the result.
    private void showMessage() {
        display = Display.getDisplay(this);
        // numb = number.getString();
        if (number.getString().length() == 0 || textUrdu.getString().length() == 0) {
            Alert alert = new Alert("error");
            alert.setString("Enter phone number");
            alert.setTimeout(5000);
            display.setCurrent(alert);
        } else if (number.getString().length() > 11) {
            Alert alert = new Alert("error");
            alert.setString("invalid number");
            alert.setTimeout(5000);
            display.setCurrent(alert);
        } else {
            Alert alert = new Alert("error");
            alert.setString("success");
            alert.setTimeout(5000);
            display.setCurrent(alert);
        }
    }

    void showMessage(String message, Displayable displayable) {
        Alert alert = new Alert("");
        alert.setTitle("Error");
        alert.setString(message);
        alert.setType(AlertType.ERROR);
        alert.setTimeout(5000);
        display.setCurrent(alert, displayable);
    }

    /** Performs an action assigned to the Mobile Device - MIDlet Started point. */
    public void startMIDlet() {
        switchDisplayable(null, getForm());
        form.setCommandListener(this);
        form.setItemStateListener(this);
    }

    public void resumeMIDlet() {
    }

    /** Switches the current displayable; if alert is non-null it is shown first. */
    public void switchDisplayable(Alert alert, Displayable nextDisplayable) {
        Display display = getDisplay();
        if (alert == null) {
            display.setCurrent(nextDisplayable);
        } else {
            display.setCurrent(alert, nextDisplayable);
        }
    }

    public void commandAction(Command command, Displayable displayable) {
        if (displayable == form) {
            if (command == exit) {
                exitMIDlet();
            } else if (command == sendMesg) {
                String mno = number.getString();
                String msg = textUrdu.getString();
                if (mno.equals("")) {
                    alert = new Alert("Alert");
                    alert.setString("Enter Mobile Number!!!");
                    alert.setTimeout(2000);
                    display.setCurrent(alert);
                } else {
                    try {
                        clientConn = (MessageConnection) Connector.open("sms://" + mno);
                    } catch (Exception e) {
                        alert = new Alert("Alert");
                        alert.setString("Unable to connect to station because of a network problem");
                        alert.setTimeout(2000);
                        display.setCurrent(alert);
                    }
                    try {
                        TextMessage textmessage = (TextMessage) clientConn.newMessage(MessageConnection.TEXT_MESSAGE);
                        textmessage.setAddress("sms://" + mno);
                        textmessage.setPayloadText(msg);
                        clientConn.send(textmessage);
                        // NOTE: this is why nothing shows up in Sent Items. JSR-120
                        // sends the message but cannot write to the phone's native
                        // Sent folder. To keep a sent-items history, persist the
                        // message yourself here (e.g. in an RMS RecordStore) and
                        // build your own "Sent Items" screen from that store.
                    } catch (Exception e) {
                        Alert alert = new Alert("Alert", "", null, AlertType.INFO);
                        alert.setTimeout(Alert.FOREVER);
                        alert.setString("Unable to send");
                        display.setCurrent(alert);
                    }
                }
            }
        }
    }

    public void commandAction(Command command, Item item) {
        if (item == number) {
            if (command == add) {
                // no action implemented yet
            }
        } else if (item == send) {
            if (command == select) {
                // no action implemented yet
            }
        } else if (item == textUrdu) {
            if (command == urdu) {
                // Toggles Urdu input mode.
                if (isUrdu) {
                    isUrdu = false;
                } else {
                    isUrdu = true;
                    TextField tf = (TextField) item;
                }
            }
        }
    }

    public Form getForm() {
        if (form == null) {
            form = new Form("form", new Item[]{getNumber(), getTextUrdu(), getStringItem(), getSend()});
            form.addCommand(getExit());
            form.addCommand(getSendMesg());
            form.setCommandListener(this);
            form.setItemStateListener(this);
        }
        return form;
    }

    public TextField getNumber() {
        if (number == null) {
            number = new TextField("Number ", null, 11, TextField.PHONENUMBER);
            number.addCommand(getAdd());
            number.setItemCommandListener(this);
            number.setDefaultCommand(getAdd());
        }
        return number;
    }

    public TextField getTextUrdu() {
        if (textUrdu == null) {
            textUrdu = new TextField("Message", null, 2000, TextField.ANY);
            textUrdu.addCommand(getUrdu());
            textUrdu.setItemCommandListener(this);
        }
        return textUrdu;
    }

    public Command getExit() {
        if (exit == null) {
            exit = new Command("Exit", Command.EXIT, 0);
        }
        return exit;
    }

    public Command getSendMesg() {
        if (sendMesg == null) {
            sendMesg = new Command("send", Command.OK, 0);
        }
        return sendMesg;
    }

    public Command getAdd() {
        if (add == null) {
            add = new Command("add", Command.OK, 0);
        }
        return add;
    }

    public Command getUrdu() {
        if (urdu == null) {
            urdu = new Command("urdu", Command.OK, 0);
        }
        return urdu;
    }

    public StringItem getStringItem() {
        if (stringItem == null) {
            stringItem = new StringItem("string", null);
        }
        return stringItem;
    }

    public StringItem getSend() {
        if (send == null) {
            send = new StringItem("", "send", Item.BUTTON);
            send.addCommand(getSelect());
            send.setItemCommandListener(this);
            send.setDefaultCommand(getSelect());
        }
        return send;
    }

    public Command getSelect() {
        if (select == null) {
            select = new Command("select", Command.OK, 0);
        }
        return select;
    }

    public SimpleCancellableTask getTask() {
        if (task == null) {
            task = new SimpleCancellableTask();
            task.setExecutable(new org.netbeans.microedition.util.Executable() {
                public void execute() throws Exception {
                    // task body not implemented
                }
            });
        }
        return task;
    }

    public Display getDisplay() {
        return Display.getDisplay(this);
    }

    public void exitMIDlet() {
        switchDisplayable(null, null);
        destroyApp(true);
        notifyDestroyed();
    }

    public void startApp() {
        startMIDlet();
        display.setCurrent(form);
    }

    public void pauseApp() {
        midletPaused = true;
    }

    public void destroyApp(boolean unconditional) {
    }

    public void itemStateChanged(Item item) {
        if (item == textUrdu) {
            if (isUrdu) {
                stringItem.setText("urdu");
                // Replace the last typed character with its Urdu equivalent.
                TextField tf = (TextField) item;
                String s = tf.getString();
                char ch = s.charAt(s.length() - 1);
                s = s.substring(0, s.length() - 1);
                ch = Urdu.ToUrdu(ch);
                s = s + String.valueOf(ch);
                tf.setString("");
                tf.setString(s);
            }
            // The original code threw UnsupportedOperationException("Not supported
            // yet.") unconditionally here, which crashes the form on every change
            // to the message field; it has been removed.
        }
    }
}

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
Hi. So I saw this post and read it, and it seems like bulk copy might be the way to go: http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c

I still have some questions and want to know how things actually work. So I found two tutorials:

http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx

The first uses two ADO.NET 2.0 features, BulkInsert and BulkCopy. The second one uses LINQ to SQL and OpenXML. This sort of appeals to me as I am using LINQ to SQL already and prefer it over ADO.NET. However, as one person pointed out in the posts, he is just going around the issue at the cost of performance (nothing wrong with that in my opinion).

First I will talk about the two ways in the first tutorial. I am using VS2010 Express, .NET 4.0, MVC 2.0, SQL Server 2005.

- Is ADO.NET 2.0 the most current version?
- Based on the technology I am using, are there updates to what I am going to show that would improve it somehow?
- Is there anything these tutorials left out that I should know about?

BulkInsert

I am using this table for all the examples:

CREATE TABLE [dbo].[TBL_TEST_TEST]
(
  ID INT IDENTITY(1,1) PRIMARY KEY,
  [NAME] [varchar](50)
)

SP code:

USE [Test]
GO
/****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50))
AS
BEGIN
  INSERT INTO TBL_TEST_TEST VALUES (@Name);
END

C# code:

/// <summary>
/// Another ADO.NET 2.0 way that uses a stored procedure to do a bulk insert.
/// Seems slower than the "BatchBulkCopy" way, and it crashes when you try to insert 500,000 records in one go.
/// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
/// </summary>
private static void BatchInsert()
{
  // Get the DataTable with rows in state RowState.Added
  DataTable dtInsertRows = GetDataTable();

  SqlConnection connection = new SqlConnection(connectionString);
  SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
  command.CommandType = CommandType.StoredProcedure;
  command.UpdatedRowSource = UpdateRowSource.None;

  // Set the parameter with the appropriate source column name
  command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName);

  SqlDataAdapter adpt = new SqlDataAdapter();
  adpt.InsertCommand = command;
  // Specify the number of records to be inserted/updated in one go. Default is 1.
  adpt.UpdateBatchSize = 1000;

  connection.Open();
  int recordsInserted = adpt.Update(dtInsertRows);
  connection.Close();
}

So the first thing is the batch size. Why would you set the batch size to anything but the number of records you are sending? I am sending 500,000 records, so I set a batch size of 500,000. Next, why does it crash when I do that? If I set the batch size to 1,000 it works just fine.

System.Data.SqlClient.SqlException was unhandled
Message="A transport-level error has occurred when sending the request to the server. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)"
Source=".Net SqlClient Data Provider"
ErrorCode=-2146232060
Class=20
LineNumber=0
Number=233
Server=""
State=0
StackTrace:
  at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
  at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
  at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping)
  at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping)
  at System.Data.Common.DbDataAdapter.Update(DataTable dataTable)
  at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124
  at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16
InnerException:

Inserting 500,000 records with an insert batch size of 1,000 took 2 minutes 54 seconds. Of course this is no official time; I sat there with a stopwatch (I am sure there are better ways, but I was too lazy to look up what they were). So I find that kind of slow compared to all my other attempts (except the plain LINQ to SQL insert), and I am not really sure why.

Next I looked at BulkCopy:

/// <summary>
/// An ADO.NET 2.0 way to mass insert records. This seems to be the fastest.
/// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
/// </summary>
private static void BatchBulkCopy()
{
  // Get the DataTable
  DataTable dtInsertRows = GetDataTable();

  using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
  {
    sbc.DestinationTableName = "TBL_TEST_TEST";

    // Number of records to be processed in one go
    sbc.BatchSize = 500000;

    // Map the source column from the DataTable to the destination column in the SQL Server 2005 table
    // sbc.ColumnMappings.Add("ID", "ID");
    sbc.ColumnMappings.Add("NAME", "NAME");

    // Number of records after which the client has to be notified about its status
    sbc.NotifyAfter = dtInsertRows.Rows.Count;

    // Event that gets fired when NotifyAfter records are processed
    sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);

    // Finally write to the server
    sbc.WriteToServer(dtInsertRows);
    sbc.Close();
  }
}

This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BatchBulkCopy had no problem with a 500,000 batch size. So again, why make it smaller than the number of records you want to send? With BulkCopy and a 500,000 batch size it took only 5 seconds to complete. With a batch size of 1,000 it took only 8 seconds. So much faster than the BulkInsert one above.

Now I tried the other tutorial.

USE [Test]
GO
/****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST] (@UpdatedProdData nText)
AS
DECLARE @hDoc int

EXEC sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

INSERT INTO TBL_TEST_TEST (NAME)
SELECT XMLProdTable.NAME
FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
  WITH (
    ID Int,
    NAME varchar(100)
  ) XMLProdTable

EXEC sp_xml_removedocument @hDoc

C# code:

/// <summary>
/// This is using LINQ to SQL to make the table objects.
/// They are then serialized to an XML document and sent to a stored procedure
/// that then does a bulk insert (I think with OPENXML).
/// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
/// </summary>
private static void LinqInsertXMLBatch()
{
  using (TestDataContext db = new TestDataContext())
  {
    TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000];
    for (int count = 0; count < 500000; count++)
    {
      TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
      testRecord.NAME = "Name : " + count;
      testRecords[count] = testRecord;
    }

    StringBuilder sBuilder = new StringBuilder();
    System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
    XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[]));
    serializer.Serialize(sWriter, testRecords);
    db.insertTestData(sBuilder.ToString());
  }
}

So I like this because I get to use objects, even though it is kind of redundant. I don't get how the SP works, though; I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, and I do not even know how to take this example SP and change it to fit my tables, since, as I said, I don't know what is going on. I also don't know what happens if the object contains more tables. Say I have a ProductName table that has a relationship to a Product table, or something like that. In LINQ to SQL you can get the ProductName object and make changes to the Product table through that same object, so I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds.

The last way, of course, was just using LINQ to do it all, and it was pretty bad.

/// <summary>
/// This is using LINQ to SQL to insert lots of records.
/// This way is slow as it uses no mass insert.
/// I only tried to insert 50,000 records, as I did not want to sit around until it did 500,000.
/// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
/// </summary>
private static void LinqInsertAll()
{
  using (TestDataContext db = new TestDataContext())
  {
    db.CommandTimeout = 600;
    for (int count = 0; count < 50000; count++)
    {
      TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
      testRecord.NAME = "Name : " + count;
      db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
    }
    db.SubmitChanges();
  }
}

I did only 50,000 records, and that took over a minute. So I have really narrowed it down to the LINQ to SQL bulk insert way or bulk copy. I am just not sure how to do either one when you have relationships. I am also not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying that yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes LINQ to SQL nicer, as you've got objects, especially if you're first parsing data from an XML file before you insert it into the database.

Full C# code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml.Serialization;
using System.Data;
using System.Data.SqlClient;

namespace TestIQueryable
{
  class Program
  {
    private static string connectionString = "";

    static void Main(string[] args)
    {
      BatchInsert();
      Console.WriteLine("done");
    }

    /// <summary>
    /// This is using LINQ to SQL to insert lots of records.
    /// This way is slow as it uses no mass insert.
    /// I only tried to insert 50,000 records, as I did not want to sit around until it did 500,000.
    /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
    /// </summary>
    private static void LinqInsertAll()
    {
      using (TestDataContext db = new TestDataContext())
      {
        db.CommandTimeout = 600;
        for (int count = 0; count < 50000; count++)
        {
          TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
          testRecord.NAME = "Name : " + count;
          db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
        }
        db.SubmitChanges();
      }
    }

    /// <summary>
    /// This is using LINQ to SQL to make the table objects.
    /// They are then serialized to an XML document and sent to a stored procedure
    /// that then does a bulk insert (I think with OPENXML).
    /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
    /// </summary>
    private static void LinqInsertXMLBatch()
    {
      using (TestDataContext db = new TestDataContext())
      {
        TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000];
        for (int count = 0; count < 500000; count++)
        {
          TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
          testRecord.NAME = "Name : " + count;
          testRecords[count] = testRecord;
        }

        StringBuilder sBuilder = new StringBuilder();
        System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
        XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[]));
        serializer.Serialize(sWriter, testRecords);
        db.insertTestData(sBuilder.ToString());
      }
    }

    /// <summary>
    /// An ADO.NET 2.0 way to mass insert records. This seems to be the fastest.
    /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
    /// </summary>
    private static void BatchBulkCopy()
    {
      // Get the DataTable
      DataTable dtInsertRows = GetDataTable();

      using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
      {
        sbc.DestinationTableName = "TBL_TEST_TEST";

        // Number of records to be processed in one go
        sbc.BatchSize = 500000;

        // Map the source column from the DataTable to the destination column
        // sbc.ColumnMappings.Add("ID", "ID");
        sbc.ColumnMappings.Add("NAME", "NAME");

        // Number of records after which the client has to be notified about its status
        sbc.NotifyAfter = dtInsertRows.Rows.Count;

        // Event that gets fired when NotifyAfter records are processed
        sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);

        // Finally write to the server
        sbc.WriteToServer(dtInsertRows);
        sbc.Close();
      }
    }

    /// <summary>
    /// Another ADO.NET 2.0 way that uses a stored procedure to do a bulk insert.
    /// Seems slower than the "BatchBulkCopy" way, and it crashes when you try to insert 500,000 records in one go.
    /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
    /// </summary>
    private static void BatchInsert()
    {
      // Get the DataTable with rows in state RowState.Added
      DataTable dtInsertRows = GetDataTable();

      SqlConnection connection = new SqlConnection(connectionString);
      SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
      command.CommandType = CommandType.StoredProcedure;
      command.UpdatedRowSource = UpdateRowSource.None;

      // Set the parameter with the appropriate source column name
      command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName);

      SqlDataAdapter adpt = new SqlDataAdapter();
      adpt.InsertCommand = command;
      // Specify the number of records to be inserted/updated in one go. Default is 1.
      adpt.UpdateBatchSize = 500000;

      connection.Open();
      int recordsInserted = adpt.Update(dtInsertRows);
      connection.Close();
    }

    private static DataTable GetDataTable()
    {
      // You first need a DataTable that has all the insert values in it
      DataTable dtInsertRows = new DataTable();
      dtInsertRows.Columns.Add("NAME");

      for (int i = 0; i < 500000; i++)
      {
        DataRow drInsertRow = dtInsertRows.NewRow();
        string name = "Name : " + i;
        drInsertRow["NAME"] = name;
        dtInsertRows.Rows.Add(drInsertRow);
      }
      return dtInsertRows;
    }

    static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
    {
      Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString());
    }
  }
}
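On the open question about related tables (e.g., Product and ProductName): one common pattern with SqlBulkCopy is to assign the parent keys on the client and use SqlBulkCopyOptions.KeepIdentity so the child rows can reference them. Below is a rough sketch, under assumptions: the Product(ID, CODE) and ProductName(PRODUCT_ID, NAME) tables and their columns are invented for illustration and are not from either tutorial.

using System.Data;
using System.Data.SqlClient;

class BulkRelatedInsert
{
  // Sketch: insert parents and children in one transaction by assigning
  // the parent IDs on the client and keeping them with KeepIdentity.
  static void InsertProductsWithNames(string connectionString)
  {
    var products = new DataTable("Product");
    products.Columns.Add("ID", typeof(int));
    products.Columns.Add("CODE", typeof(string));

    var names = new DataTable("ProductName");
    names.Columns.Add("PRODUCT_ID", typeof(int));
    names.Columns.Add("NAME", typeof(string));

    for (int i = 1; i <= 1000; i++)
    {
      products.Rows.Add(i, "P" + i);        // client-assigned key
      names.Rows.Add(i, "Name : " + i);     // child row references it
    }

    using (var connection = new SqlConnection(connectionString))
    {
      connection.Open();
      using (var tx = connection.BeginTransaction())
      {
        // KeepIdentity makes SQL Server store our IDs instead of
        // generating new ones, so the child rows stay valid.
        using (var bcp = new SqlBulkCopy(connection, SqlBulkCopyOptions.KeepIdentity, tx))
        {
          bcp.DestinationTableName = "Product";     // parent first, for the FK
          bcp.WriteToServer(products);

          bcp.DestinationTableName = "ProductName"; // then the children
          bcp.WriteToServer(names);
        }
        tx.Commit();
      }
    }
  }
}

The transaction means either both tables load or neither does, which keeps the foreign keys consistent. One caveat with client-assigned keys on an IDENTITY column: it is worth verifying the current identity value afterwards (DBCC CHECKIDENT) before resuming normal inserts.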

    Read the article

  • Aggregate SharePoint Event/Items into your Calendar view using Calendar Overlay

    - by eJugnoo
    One of the most common features I have seen in common use for SharePoint (prior to 2010) in Intranet environments for Team site is Calendar’s. Not only the Calendar list type, but also the ability to add a Calendar view to any list that has the desired columns to construct a Calendar – such as Start, End, Title etc. While this was all great for a single site/calendar, the problem of having to track numerous calendar’s remained. With introduction of Outlook 2007 bi-directional integration with SharePoint, and particularly the ability of Outlook to overlay calendar helped bridge the gap. Now one could connect to number of team sites, and setup Calendar overlays in Outlook using varying colours, to easily identify event source and yet benefit from the plotting of events on single Calendar view. This was all good, but each user in your Enterprise was supposed to setup in a “pull” fashion. This is good for flexibility, not so good when you need to “push” consistency and productivity (re-use). So, what was missing on SharePoint is the ability to have server-side overlay’s that everyone can see – in a single place, aggregating multiple sources. Until SharePoint 2010 arrived! Calendars Overlay in SharePoint 2010 There are Calendar lists and Calendar views. View can be created for almost all lists, as far as you have desired column’s in a list like Start, End, Title etc. to be able to describe and plot an item in a Calendar format. In SharePoint 2010, create a new Calendar list. Go to Calendar ribbon tab, and click Calendar Overlay. You get the screen with list of existing Overlay’s associated with current Calendar (list – in our case). Click on “New Calendar”… Notice the breadcrumb! You are adding Overlay to existing list (Team Calendar – in our case). You have choice of “pulling” Calendar info from an existing Calendar (list/view) in SharePoint or even from Exchange! Set standard info like a name, description and decide the colour you want for the items in aggregated Calendar overlay. Select the source site/list/view, anywhere in farm. When you select Exchange as source of Calendar, you get option to add OWA and Exchange Web Service url. I will cover details of connecting with Exchange in another post, and focus on Overlay’s with SharePoint for this one. Once you have added a new Calendar overlay to existing Calendar veiw, you get something like below for Day view, Week view, and Month view respectively Notice the Overlay colours: Now, if you decide to connect this Calendar to Outlook to sync the items, it will only sync items from main view, and not from Overlay source. So such Overlay of calendar’s is server-side aggregation only. That increases my curiosity, so I try adding the Calendar list view as a web-part on a new page. As you see, this instance of view didn’t include item from source that we had added to default Calendar view. This is – probably – due to the fact that this is a new web-part view for the page. If you want to add overlay to this one, you have to redo that from Ribbon. This also means, subject to purpose and context you get the flexibility to decide what overlay is suited. Also you can only add 10 Overlay’s to an existing view instance. Conclusion Calendar Overlay is clearly a very useful feature that fills a gap of not being able to aggregate information from multiple sources into a Calendar view within context of current items. Source of items can be existing SharePoint calendar views on any site, or even Exchange (via OWA/Exchange web services). 
    Conclusion

    Calendar Overlay is clearly a very useful feature that fills the gap of not being able to aggregate information from multiple sources into a calendar view within the context of the current items. The source of items can be an existing SharePoint calendar view on any site, or even Exchange (via OWA/Exchange Web Services). The list type of the source doesn't matter; it just needs a Calendar view type available. You can have up to 10 overlays. Overlays belong to a specific view and are server-side only, which means they do not get synced in Outlook. While you can drag and drop the current list's items, you cannot edit overlay items; they are read-only within the scope of the current calendar view. You can, of course, click on an overlay item to edit it at its source. I'd like to hear how you think overlays will help in your case, or how you are already using them... Enjoy SharePoint! --Sharad

    Read the article

  • SQL SERVER – History of SQL Server Database Encryption

    - by pinaldave
    I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. Our conversation turned to how Microsoft has evolved its encryption technology, which led to the history of encryption tools in SQL Server. Michael pointed me to page 18 of his book and explicitly gave me permission to reproduce the relevant part of that history here.

    Encryption in SQL Server 2000

    Built-in cryptographic encryption functionality was nonexistent in SQL Server 2000 and prior versions. In order to get server-side encryption in SQL Server you had to resort to purchasing or creating your own SQL Server XPs. Creating your own cryptographic XPs could be a daunting task, owing to the fact that XPs had to be compiled as native DLLs (using a language like C or C++) and the XP application programming interface (API) was poorly documented. In addition, there were always concerns around creating well-behaved XPs that "played nicely" with the SQL Server process.

    Encryption in SQL Server 2005

    Prior to the release of SQL Server 2005 there was a flurry of regulatory activity in response to accounting scandals and attacks on repositories of confidential consumer data. Much of this regulation centered on the need for protecting and controlling access to sensitive financial and consumer information. With the release of SQL Server 2005, Microsoft responded to the increasing demand for built-in encryption by providing the necessary tools to encrypt data at the column level. This functionality prominently featured the following:

    - Support for column-level encryption of data using symmetric keys or passphrases.
    - Built-in access to a variety of symmetric and asymmetric encryption algorithms, including AES, DES, Triple DES, RC2, RC4, and RSA.
    - Capability to create and manage symmetric keys.
    - Ability to generate asymmetric keys and self-signed certificates, or to install external asymmetric keys and certificates.
    - Implementation of a hierarchical model for encryption key management, similar to the ANSI X9.17 standard model.
    - SQL functions to generate one-way hash codes and digital signatures, including SHA-1 and MD5 hashes.
    - Additional SQL functions to encrypt and decrypt data.
    - Extensions to the SQL language to support creation, use, and administration of encryption keys and certificates.
    - SQL CLR extensions that provide access to .NET-based encryption functionality.

    Encryption in SQL Server 2008

    Encryption demands have increased over the past few years. For instance, there has been a demand for the ability to store encryption keys "off-the-box," physically separate from the database and the data it contains. There is also a recognized requirement for legacy databases and applications to take advantage of encryption without changing the existing code base. To address these needs, SQL Server 2008 adds the following features to its encryption arsenal:

    - Transparent Data Encryption (TDE): allows you to encrypt an entire database, including log files and the tempdb database, in such a way that it is transparent to client applications.
    - Extensible Key Management (EKM): allows you to store and manage your encryption keys on an external device known as a hardware security module (HSM).
    - Cryptographic random number generation functionality.
    - Additional cryptography-related catalog views and dynamic management views.
    - SQL language extensions to support the new encryption functionality.
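    To make the 2005-era column-level functions concrete, here is a minimal sketch that calls EncryptByPassPhrase and DecryptByPassPhrase from ADO.NET against any SQL Server 2005 or later instance. The connection string, passphrase, and sample value are placeholders; a real design would normally use the certificate/symmetric key hierarchy described above rather than a hard-coded passphrase.

        using System;
        using System.Data.SqlClient;

        class PassphraseEncryptionDemo
        {
            static void Main()
            {
                // Placeholder connection string; point it at a SQL Server 2005+ instance.
                const string connStr = "Server=.;Database=tempdb;Integrated Security=true";
                const string passphrase = "placeholder-p@ssphrase";

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();

                    // EncryptByPassPhrase derives a symmetric key from the passphrase
                    // and returns the ciphertext as varbinary.
                    var encrypt = new SqlCommand(
                        "SELECT EncryptByPassPhrase(@phrase, @plaintext)", conn);
                    encrypt.Parameters.AddWithValue("@phrase", passphrase);
                    encrypt.Parameters.AddWithValue("@plaintext", "4111-1111-1111-1111");
                    byte[] ciphertext = (byte[])encrypt.ExecuteScalar();

                    // DecryptByPassPhrase returns varbinary; CONVERT restores the
                    // original nvarchar value.
                    var decrypt = new SqlCommand(
                        "SELECT CONVERT(nvarchar(max), " +
                        "DecryptByPassPhrase(@phrase, @ciphertext))", conn);
                    decrypt.Parameters.AddWithValue("@phrase", passphrase);
                    decrypt.Parameters.AddWithValue("@ciphertext", ciphertext);
                    Console.WriteLine((string)decrypt.ExecuteScalar()); // original value
                }
            }
        }

    The passphrase variant is the lightest-weight option; the CREATE SYMMETRIC KEY / EncryptByKey route adds the key creation, management, and hierarchy features listed above.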
    The book covers all of these tools, across its various chapters, in one simple story. If you are interested in how encryption evolved and reached the stage where it is today, this book is a must for everyone. You can read my earlier review of the book over here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology Tagged: Encryption, SQL Server Encryption, SQLPASS

    Read the article

< Previous Page | 347 348 349 350 351 352 353 354 355 356 357 358  | Next Page >