Search Results

Search found 9371 results on 375 pages for 'existing'.


  • Current State EA: Focus on the Integration!!!

    - by Eric A. Stephens
    A recent project has me at the front end of a large implementation effort covering multiple software components. In addition to the challenges of integrating 15-20 separate and new software components, there is the challenge of integrating the portfolio into an existing environment. As with other clients I've worked with and other environments I've worked in over the years, this is typical: the applications are undocumented and under-patched, leaving a mystery for any architect leading change. We can boil down most architecture development methodologies (ADMs) to first understanding the current/baseline state and then envisioning one or more future states. Many pundits emphasize the need to focus on the future/target states. I agree, since enterprise architecture (EA) is about where you are going, not so much where you have been. But to be effective in the future, I contend some focused time needs to be spent on the current state, and specifically on the integration. Integration is always the difficult part of a project (I might put it more coarsely at a cocktail party). While I don't have a case study, my anecdotal experience suggests poorly integrated application portfolios tend to cost more to operate and create entropy when trying to respond to new changes and opportunities. In the aforementioned project, I was able to get one of our EAs assigned to focus on just integration almost immediately. While we're still early in the process, this EA is uncovering all sorts of information that will greatly assist our future-state planning for this solution. This information is driving early decision making that we anticipate will accelerate our efforts moving forward.

    Read the article

  • How to manage a Closed Source High-Risk Project?

    - by abel
    I am currently planning to develop a J2EE website and wish to bring in 1 developer and 1 web designer to assist me. The project is a financial app with a niche market. I plan to keep the source closed. However, I fear that my would-be employees could easily copy the codebase and use it or sell it to a third party, especially when they switch jobs. The app development will take 4-6 months, perhaps more, and I may have to bring in people after the app goes live. How do I keep the source to myself? Are there techniques companies use to guard their source? I foresee disabling pen drives and DVD writers on my development machines, but uploading data or attaching the code to an email would still be possible. My question is incomplete, but programmers who have been in my situation, please advise: how should I go about this? Building a team, maintaining code secrecy, etc. I am also willing to sign a secrecy contract with the employees if needed. (Please add relevant tags) Update Thank you for all the answers. I certainly won't be disabling all USB ports and DVD writers now. But I think I should be logging activity (how exactly should I do that?). I am wary of scalpers who would join and then run off with the existing code. I haven't met any, but I have been advised to be wary of them. I would include a secrecy clause, but given that this is a startup with almost no funding, in a highly competitive business niche with bigger players in the field, I doubt I would be able to detect or pursue any scalpers. How do I hire people I trust when I don't know them personally? Their resume will be helpful, but otherwise trust will develop only with time. But finally, even if they do run away with the code, it is service that matters after the sale is made. So I am not really worried for the long term.

    Read the article

  • Making A Photo-Sharing App For Android In Eclipse [on hold]

    - by user3694394
    I've only just started developing mobile apps, which is something that I've been wanting to learn for a while now. I'm from an indie games studio, making PC games for around the last 3 years, and I finally decided to move into Android app development. The only problem I'm having is that I don't know where to start. The project which I'm aiming to create will be something similar to Instagram: basically a photo-sharing app which allows users to take new photos, or pull them from their device, and add filters to them before posting them. I have a rough idea of how I could go about doing this, but I need pointing towards any tutorials available for each specific step. So, here's my idea: Create a UI in Eclipse (this won't be a problem for me; I should be able to do this fine through XML files). Set up a server-side database to store all user info and uploaded images (the images will need to be converted into byte arrays, and I have no idea how to do this through a database). My best idea would be to use a MySQL database for this. Add user interactions (likes, favourites, reposts, etc.). These would, again, have to be stored in the database (or, at least, I think they would). Add the ability to take new photos using the phone's camera (I can probably do this anyway, using the Camera API). Add the ability to pull existing photos from the device (again, pretty simple to do). Add the ability to add filters to any photos (I had a look around, and there are some repos and resources which allow you to do this, but they're mainly for iOS development). Add Facebook/Twitter integration (possibly) to allow photos to be shared to other social networks. Create a news feed which shows users all of the latest photos from their friends, and allows them to post their own images to it. Give all registered users their own wall/page which has their latest posts/images displayed on it. Add the ability to allow users to follow other users, and display their followed users' posts on their news feed. Yep. It's not going to be easy, and I don't even know if it's possible for me to do alone in Eclipse. However, this is the plan, and I'm going to do my best to learn everything I need to know to do this successfully. My actual question would be: how should I start doing this? Where do I begin learning how to do all of this? I've had a look at Snapify, which can be edited via Parse, but I won't be spending hundreds of dollars (since I'm 15 and just don't have the available funds) on software. I have extensive knowledge of Java (again, I've been making games for around 3 years, mainly in Java) and various scripting languages. So, hopefully, this will be of some use here. Thanks in advance, Josh.

    Read the article

  • What algorithms can I use to detect if articles or posts are duplicates?

    - by michael
    I'm trying to detect if an article or forum post is a duplicate entry within the database. I've given this some thought, coming to the conclusion that someone duplicating content will do so in one of three ways (in descending order of difficulty to detect):
    1. simple copy-paste of the whole text
    2. copying and pasting parts of the text and merging it with their own
    3. copying an article from an external site and masquerading it as their own
    Prepping Text For Analysis: the goal is to remove any anomalies and make the text as "pure" as possible. For more accurate results, the text is "standardized" by:
    - Stripping duplicate white space and trimming leading and trailing white space.
    - Standardizing newlines to \n.
    - Removing HTML tags.
    - Stripping URLs using a regex (Daring Fireball's).
    - Stripping the BB code I use in my application.
    - Converting accented and other non-English characters to their plain form.
    I store information about each article in (1) a statistics table and (2) a keywords table.
    (1) Statistics Table: the following statistics are stored about the textual content (much like this post): text length, letter count, word count, sentence count, average words per sentence, automated readability index, and gunning fog score. For European languages, Coleman-Liau and the Automated Readability Index should be used, as they do not rely on syllable counting and so should produce reasonably accurate scores.
    (2) Keywords Table: the keywords are generated by excluding a huge list of stop words (common words), e.g. 'the', 'a', 'of', 'to', etc.
    Sample Data:
    text_length, 3963
    letter_count, 3052
    word_count, 684
    sentence_count, 33
    word_per_sentence, 21
    gunning_fog, 11.5
    auto_read_index, 9.9
    keyword 1, killed
    keyword 2, officers
    keyword 3, police
    It should be noted that once an article gets updated, all of the above statistics are regenerated and could be completely different values. How could I use the above information to detect whether an article being published for the first time already exists within the database? I'm aware that anything I design will not be perfect, the biggest risks being that (1) content that is not a duplicate gets flagged as a duplicate and (2) duplicate content gets through. So the algorithm should generate a risk-assessment number from 0 (no duplicate risk) through 5 (possible duplicate) to 10 (duplicate). Anything above 5 means there is a good possibility that the content is a duplicate; in that case the content could be flagged and linked to the articles that are possible duplicates, and a human could decide whether to delete or allow it. As I said before, I'm storing keywords for the whole article; however, I wonder if I could do the same on a per-paragraph basis. That would mean further separating my data in the DB, but it would also make it easier to detect case 2 above. I'm thinking of a weighted average across the statistics, but in what order, and with what consequences... (A rough sketch of that weighted-average idea follows below.)
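    Not from the original question, but here is a minimal C# sketch of the weighted-average scoring idea, comparing a candidate article's stored statistics and keywords against an existing article's. The ArticleStats type, the similarity measures, and the 0.7/0.3 weights are all hypothetical and would need tuning against real data.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical shape of one row from the statistics and keywords tables.
    public record ArticleStats(int WordCount, double WordsPerSentence,
        double GunningFog, double AutoReadIndex, ISet<string> Keywords);

    public static class DuplicateRisk
    {
        // Returns a 0-10 risk score: 0 = no duplicate risk, 10 = near-certain duplicate.
        public static double Score(ArticleStats candidate, ArticleStats existing)
        {
            // Keyword overlap (Jaccard similarity): the strongest signal here.
            double common = candidate.Keywords.Intersect(existing.Keywords).Count();
            double total = candidate.Keywords.Union(existing.Keywords).Count();
            double keywordSim = total == 0 ? 0 : common / total;

            // Statistical closeness: 1 when identical, approaching 0 as values diverge.
            double Close(double a, double b) =>
                1.0 - Math.Min(1.0, Math.Abs(a - b) / Math.Max(Math.Max(a, b), 1.0));
            double statSim = (Close(candidate.WordCount, existing.WordCount)
                            + Close(candidate.WordsPerSentence, existing.WordsPerSentence)
                            + Close(candidate.GunningFog, existing.GunningFog)
                            + Close(candidate.AutoReadIndex, existing.AutoReadIndex)) / 4.0;

            // Illustrative weights: keyword overlap dominates the statistics.
            return 10.0 * (0.7 * keywordSim + 0.3 * statSim);
        }
    }

    A score above 5 would then flag the pair for human review, per the thresholds described above.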

    Read the article

  • PeopleSoft Grants & the Federal Agency Letter of Credit Draw Changes

    - by Mark Rosenberg
    For decades, most, if not all, US Federal agencies that sponsor research allowed grant recipients to request and receive payments using pooled accounts, commonly known as pooled letter of credit (LOC) draws. This gave organizations such as universities and hospitals fast and efficient access to reimbursement of the expenditures they incurred conducting research across a portfolio of grants. To support this business practice, the PeopleSoft Grants solution has delivered an LOC Draw report to provide the total request amount along with all of the supporting invoice details for reconciliation and audit purposes. Now, in an attempt to provide greater transparency, eliminate fraud, strengthen accountability for grant-related financial transactions, and simplify grant award closeout, many US Federal sponsors are transitioning from the "pooling" letter of credit draw method to requesting on a "grant-by-grant" basis. The National Science Foundation, the second largest issuer of Federal awards, already transitioned to detailed grant draws in 2013. And, in response to the U.S. Department of Health and Human Services (HHS) directive to HHS-supported agencies, the largest Federal awards sponsor, the National Institutes of Health (NIH), will fully transition to the new HHS subaccount draw method. This will require NIH award recipients to request payments based on actual expenses incurred on an award-by-award basis. NIH is expected to fully transition to this new draw method by the end of Federal fiscal year 2015. (The NIH had planned to fully transition by the end of fiscal 2014; however, the impact to institutions was deemed significant enough that a reprieve was recently granted.) In light of these new Federal draw requirements, we have recently released these new features to aid our customers on both PeopleSoft Grants releases 9.1 and 9.2:
    1. Federal Award Identification Number on the Proposal and Award Profile
    2. Letter of credit fields on contract lines to support award-basis draws and comply with Federal closeout mandates
    3. Process to produce both pro forma and final LOC Draw Reports in BI Publisher report format
    4. Subaccount ID field on the LOC Summary and a new BI Publisher version of the LOC Summary report
    5. Subaccount field and contract info displayed on the LOC Summary page
    6. Ability to generate pro forma and invoiced draw listings by a variety of dimensions
    7. Queries for generating and manipulating data to upload into sponsor payment request systems and perform payment matching
    8. Contracts LOC Closeout query to quickly review final balances prior to initiating final draws and preparing Federal Financial Reports prior to close
    The PeopleSoft Development team actively monitors this and other major Federal changes and continues working closely with the Grants Product Advisory Group of the Higher Education User Group to ensure a clear understanding of what our customers need in order to transition to new approaches for doing business with the Federal government. For more information regarding the enhancements to the PeopleSoft Grants solution, existing customers can log in to My Oracle Support and review the Enhancements to Letter of Credit Process (Doc ID 1912692.1) associated with resolution ID 904830. This enhanced LOC functionality is available in both PeopleSoft FSCM 9.1 Bundle #31 and PeopleSoft FSCM 9.2 Update Image 8.

    Read the article

  • Addressing threats introduced by the BYOD trend

    - by kyap
    With the growth of the mobile technology segment, enterprises are facing a new type of threat introduced by the BYOD (Bring Your Own Device) trend, where employees use their own devices (laptops, tablets, or smartphones), not necessarily secured, to access the corporate network and information. In the past - and actually even right now - enterprises provided laptops to their employees for their daily work, with specific operating systems including anti-virus and desktop management tools, in order to make sure that the pool of allocated laptops was free of spyware and trojan horses before touching the internal network and sensitive information. But the BYOD reality is breaking this paradigm and opening new security holes for enterprises, as most username/password-based systems, especially internal web applications, can be accessed from less protected or unprotected devices. To address this reality we can adopt 3 approaches:
    1. Coué's approach: close your eyes and assume that your employees are mature enough to know what they should or should not do.
    2. Consensus approach: provide a list of restricted and 'certified' devices for the internal network.
    3. Military approach: access internal systems with certified laptops ONLY.
    If you choose option 1: thanks for visiting my blog, and I hope you find the other entries more useful :)
    If you choose option 2: the proliferation of new hardware and software updates every quarter makes this approach very costly and difficult to maintain.
    If you choose option 3: you need to find a way to allow access to your sensitive applications from corporate-authorized machines only, managed by the IT administrators... but how?
    The challenge with option 3 is to restrict access to certain sensitive applications to authorized machines only - or, looked at from another angle, to ensure that end users cannot access the sensitive applications unless they are using an authorized machine. So what if we find a way to keep the application credentials secret from the end users, and then submit them automatically when the end users access the application? With this model, end users do not know the username/password needed to access the applications, so even if they use their own devices they will not be able to log in. Also, there is no need to reconfigure existing applications to adapt to a new authentication scheme, given that we still leverage the same username/password authentication model at the application level. To adopt this model, you can leverage Oracle Enterprise Single Sign-On. In short, Oracle ESSO is a desktop-based solution capable of storing credentials for web and native applications. At application startup, if the application is configured as esso-enabled - check out my previous post on how to make Skype esso-enabled - Oracle ESSO automatically takes over the sign-in sequence with the stored credentials on behalf of the end user.
    Combined with Oracle ESSO Provisioning Gateway, the credentials can be 'pushed' in advance from a provisioning server, like Oracle Identity Manager or Tivoli Identity Manager, so end users can log in to sensitive applications without ever knowing the actual username and password - and therefore cannot log in from machines other than those secured by Oracle ESSO. Below is a graphical illustration of this approach. With this model, not only can you protect access to sensitive applications so that it is possible only from authorized machines, you can also implement much stronger password policies, in terms of both password complexity and password reset frequency, since end users no longer need to remember the passwords. If you are interested, do not hesitate to check out the Oracle Enterprise Single Sign-On products on OTN!

    Read the article

  • What if you could work on anything you wanted?

    - by Nick Harrison
    What if you could work on anything you wanted? Redgate is running an experiment of sorts this week, called Down Tools Week. The idea is that everyone stops working on their regular projects for a week and strikes out on something that catches their attention and drives their passion. Evidently, in many cases these projects have turned out to be new features in existing products that individuals were interested in; some were internal initiatives and some were evidently off-the-wall new ideas. Today is show and tell, where they will share with each other what they have been working on. There may well be some interesting announcements coming out of this. The prospects are exciting. I understand that Google does something similar, allowing their employees a specified amount of time to work on projects of their own choosing. This has been the breeding ground for some of my favorite services. It is a shame that more companies do not follow such practices. Now I know that most companies cannot afford to shut down everything for a week, and sometimes you can't really explore an interesting idea in 8 hours a week or however much time Google allocates, but it still may be worthwhile. What would happen if your company gave you, as an individual, 1 week each quarter to work on a project of your own design, and saw what happens? I would be happy even if you still had to get approval before your week-long adventure. Personally, I think that this could be a very effective use of training budgets. Give me a week to research something on my own and you would be amazed at what I can find out. Maybe this should be the prerequisite before starting a new project. Stagger the team onboarding, but have everyone spend a week-long sabbatical studying BizTalk before starting a project that will hinge on BizTalk. The show and tell afterwards is a great way to keep everyone honest, or at least reassure management that everyone is honest. If your goal was to spend a week researching and exploring a new technology, and you had to do a show and tell afterwards to show off what you had learned, then everyone can learn a bit of what you just learned. Sounds like a promising win-win to me. Maybe it is a pipe dream, but what if... What would you work on if given the opportunity to work on anything you wanted?

    Read the article

  • JSR 355 Final Release Moves the JCP to Version 2.9

    - by heathervc
    JSR 355, JCP EC Merge, passed the JCP EC Final Approval Ballot on 13 August 2012, with 14 yes votes and 1 abstention (1 member did not vote) on the SE/EE EC, and 12 yes votes (2 members were not eligible to vote) on the ME EC. JSR 355 posted a Final Release this week, moving the JCP program version to JCP 2.9. The transition to a merged EC will happen after the 2012 EC elections, as defined in Appendix B of the JCP (pasted below), and the EC will operate under the new EC Standing Rules. In the previous version (2.8) of this Process Document there were two separate Executive Committees, one for Java ME and one for Java SE and Java EE combined. The single Executive Committee described in this version of the Process Document will be implemented through the following process: The 2012 annual elections will be held as defined in JCP 2.8, but candidates will be informed that if they are elected their term will be for only a single year, since all candidates must stand for re-election in 2013. Immediately after the 2012 election the two ECs will be merged. Oracle's and IBM's second seats will be eliminated, resulting in a single EC with 30 members. All subsequent JSR ballots (even for in-progress JSRs) will then be voted on by the merged EC. For the 2013 annual elections, three Ratified and two Elected Seats will be eliminated, thereby reducing the EC to 25 members. All 25 seats will be up for re-election in 2013. Members elected in 2013 will be ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one-year term. All members elected in 2014 and subsequently will serve a two-year term. For clarity, note that the provisions specified in this version of the Process Document regarding a merged EC will apply to subsequent ballots on all existing JSRs, whether or not the Spec Leads of those JSRs chose to adopt this version of the Process Document in its entirety. <end of Appendix> Also of note: the materials and minutes from the July EC meeting and the June EC meeting are now available. Following the July EC meeting, Samsung and SK Telecom lost their EC seats. The June EC meeting also had a public portion; the audio from that portion is now posted online. For Spec Leads there is also the recording of the EG Nominations call.

    Read the article

  • SEO to ensure visibility for a narrow, non-competitive, non-commercial site

    - by hen3ry
    I'm webmaster of a non-commercial site in English. A non-native English speaker asked me why our site doesn't produce hits in Google searches she conducts for relevant keywords in her native language. I asked her for a list of keywords in her native language, and I naively tried inserting those into the META info in the page headers and waited a couple of weeks. No help. A little searching informed me that Google doesn't use the META info, and has not done so for a very long time. D'oh! To be entirely concrete, suppose the StackExchange folks want Russian speakers to find this site, Pro Webmasters. The direct translation in Russian of "webmaster" --thanks, Google Translator-- is "вебмастер". (Not sure this will render properly, but that's not essential to my question.) Assuming Pro Webmasters has a common template for all pages it generates, inserting "вебмастер" into the Keywords META for that template won't help, it seems. What could StackExchange do to make this site visible to users searching with the Russian keyword "вебмастер"? Pretty much all the advice I've seen boils down to this, if I understand correctly: use the desired search term often (but not too often) in the site content, and the problem will be solved. That's great, but I don't think sprinkling "вебмастер" visibly all over Pro Webmasters is going to fly. Just for completeness, crawlers must be long immune to the invisible-to-visitors schemes, e.g. formatting "вебмастер" in a tiny text size in a color the same as an existing background (white-on-white), or putting that text inside a div styled ' style="visibility: hidden" '. There are probably other equivalents. I can only think of one slightly effective method, along these lines: place an unobtrusive link on the common template to a page titled "for international users", and list desired synonyms for "webmaster" in various languages on that page. A test case --admittedly, just one-- using my site implies that a Google search for "international users" вебмастер will produce a hit for this page, and thus make the site minimally visible, despite the fact that the page will almost never be visited. At the moment, anyway. Note: all the SEO discussions I have found so far are about competitive and --almost certainly-- commercial sites. To repeat: my site is non-commercial, and it is about an obscure, narrow topic that is of interest to only a small number of people worldwide. This isn't about clawing our way to the top of competitive rankings, just making this content minimally visible to interested non-native-English speakers. Ideas? TIA

    Read the article

  • Small hiccup with VMware Player after upgrading to Ubuntu 12.04

    The upgrade process Finally, it was time to upgrade to a new LTS version of Ubuntu - 12.04, aka Precise Pangolin. I scheduled the weekend for this task, and despite the nickname of Mauritius (Cyber Island) it took roughly 6 hours to download nearly 2,400 packages. No problem in general, as I have spare machines to work on, and it was the weekend anyway. All went very smoothly, and only a few packages required manual attention due to local modifications in the configuration. With the new kernel 3.2.0-24 it was necessary to reboot the system, and compared to the last upgrade, I got my graphical login as expected. Compilation of VMware Player 4.x fails A quick test of the installed applications - Firefox, Thunderbird, Chromium, Skype, CrossOver, etc. - reveals that everything is fine in general. Firing up VMware Player displays the known kernel module dialog that requires compiling the modules for the newly booted kernel. Usually this isn't a big issue, but this time I was confronted with the situation that vmnet didn't compile as expected ("Failed to compile module vmnet"). Luckily, this issue is already well known, albeit usually with "Failed to compile module vmmon" as the reported reason, and it was nevertheless very easy and quick to find the solution. In the VMware Communities there are several forum threads related to this topic, and VMware provides the necessary patch file for Workstation 8.0.2 and Player 4.0.2. In case you are still on Workstation 7.x or Player 3.x, there is another patch file available. After downloading, extract the file like so: tar -xzvf vmware802fixlinux320.tar.gz and run the patch script as super-user: sudo ./patch-modules_3.2.0.sh This will alter the existing installation and source files of VMware Player on your machine. As a last step, which isn't described in many other resources, you have to restart the vmware service, or, for the faint-hearted, just reboot your system: sudo service vmware restart This will load the newly created kernel modules, and after that VMware Player will start as usual. Summary Upgrading any derivative of Ubuntu, in my case Xubuntu, is quickly and easily done, but it might hold some surprises from time to time. Nonetheless, it is absolutely worth going for it. Currently, this patch for VMware is the only obstacle I have had to face so far, and my system feels and looks better than before. Happy upgrade! Resources I used the following links based on Google search results: http://communities.vmware.com/message/1902218#1902218 and http://weltall.heliohost.org/wordpress/2012/01/26/vmware-workstation-8-0-2-player-4-0-2-fix-for-linux-kernel-3-2-and-3-3/ Update on VMware Player 4.0.3 Please continue reading my follow-up article in case you upgraded to either VMware Workstation 8.0.3 or VMware Player 4.0.3.

    Read the article

  • Legitimate use of the Windows "Documents" folder in programs.

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job.[1] So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess. My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary? [1] For the record, here's a quick summary of the various standard directories that should be used instead of "Documents" (a short code sketch of these follows below): RoamingAppData for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here, though, because they slow down login/logout in such environments. LocalAppData for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed. ProgramData for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network. GetTempPath for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because, like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble. "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
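    Not part of the original question, but here is a minimal C# sketch showing how a Windows program can resolve each of the standard locations listed in the footnote instead of defaulting to "Documents":

    using System;
    using System.IO;

    class StandardFolders
    {
        static void Main()
        {
            // RoamingAppData: user-specific, non-temporary data; roams across machines.
            string roaming = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);

            // LocalAppData: user-and-machine-specific data; also for large per-user files.
            string local = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);

            // ProgramData: machine-specific data shared by all users; does not roam.
            string programData = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);

            // Temp: caches and scratch files that may be wiped without loss of data.
            string temp = Path.GetTempPath();

            // "Documents" is reserved for paths the user picks explicitly, e.g. in a Save dialog.
            Console.WriteLine(string.Join(Environment.NewLine, roaming, local, programData, temp));
        }
    }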

    Read the article

  • How do I know if I am using Scrum methodologies?

    - by Jake
    When I first started at my current job, my purpose was to rewrite a massive Excel-VBA workbook application in C# WinForms, because it was thought that the new C# app would fix all existing problems and have all the new features for a perfect world. If it were a direct port, in theory it would be easy, as I would just need to go through all the formulas, conditional formatting, validations, VBA, etc. to understand it. However, that was not the case: many of the new features are tightly dependent on business logic which I am unfamiliar with. As a solo programmer, the first year was spent solely on deciphering the Excel workbook and writing the C# app. In theory, I had the business people to "help" me specify requirements, define how the GUI looks and works, test the app, and so on; but in practice it is like a constant tsunami of feature creep. At the beginning of the second year I managed to convince management that this was not going anywhere, and I made them start from scratch with the Excel-VBA. I have an "issue log" saved on the network; each time they find something they don't like about the Excel-VBA app, they write it in there. I check the log daily and consolidate issues (in my mind) into mainly 2 groups: (1) requires massive change, and (2) can be fixed in the current version. For massive-change issues, I make a copy of the latest Excel-VBA and give it a new version number, then work on it whenever I can. For current-version fixes, I make the changes in a few days to a week, and then immediately release it. I also ensure I apply the same change to any in-progress, massive-change future versions. This has gone on for about 4 months, and I feel it works great. I have made many releases and solved many real issues, and I have understood the business logic more and more. However, my boss (not IT trained) thinks what I am doing is just ad hoc changes and that I am not looking at the "bigger picture". I am struggling to convince my boss that this works. So I hope to formalize my approach and maybe borrow a buzzword to confuse him. Incidentally, I have read about Agile and Scrum, about backlogs and sprints, but it is all still very vague to me. QUESTION (finally): I want to tell him that this is Scrum! But I want to hold my breath first and ask whether my current approach is considered Scrum or Scrum-like. How can I make it more Scrum-like? Note that it is only me; there is no project leader or team.

    Read the article

  • Why do "Joke" programming languages exist? [closed]

    - by ThePlan
    First of all, please be aware this post contains some abusive language, but I hope it will not bother anyone; I apologize for the bad language, but that's what the name is. As I've been documenting existing programming languages in an attempt to make a complete list of them, I stumbled across terrible programming languages, clearly not made for actual use and implementation due to their insane difficulty. Languages such as Brainfu*k, LOLCODE, or Whitespace are joke languages, because they have no real use. For example, here is a "Hello world" program written in Brainfu*k, taken from Wikipedia. The following program prints "Hello World!" and a newline to the screen:
    +++++ +++++             initialize counter (cell #0) to 10
    [                       use loop to set the next four cells to 70/100/30/10
        > +++++ ++          add 7 to cell #1
        > +++++ +++++       add 10 to cell #2
        > +++               add 3 to cell #3
        > +                 add 1 to cell #4
        <<<< -              decrement counter (cell #0)
    ]
    > ++ .                  print 'H'
    > + .                   print 'e'
    +++++ ++ .              print 'l'
    .                       print 'l'
    +++ .                   print 'o'
    > ++ .                  print ' '
    << +++++ +++++ +++++ .  print 'W'
    > .                     print 'o'
    +++ .                   print 'r'
    ----- - .               print 'l'
    ----- --- .             print 'd'
    > + .                   print '!'
    > .                     print '\n'
    Or another example, taken from the LOLCODE language:
    HAI
    CAN HAS STDIO?
    PLZ OPEN FILE "LOLCATS.TXT"?
        AWSUM THX
            VISIBLE FILE
        O NOES
            INVISIBLE "ERROR!"
    KTHXBYE
    These languages are very difficult to learn, read, and work with. My question is: why do they exist? What is their purpose? Also, is there an official "name" for this type of language?

    Read the article

  • [EF + Oracle] Intro

    - by JTorrecilla
    Prologue I have had a busy time personally and at work, and now that I am starting to get more free time, I have decided to start a series about Entity Framework with Oracle. A while ago I had my first experience with EF and Oracle, using Oracle 10g Express and then Oracle 10g, with the same result both times: it doesn't work. Now I have downloaded Oracle 11g to test again. Tools To start using EF with Oracle we need the following:
    1. Visual Studio 2010 (not the Express Edition)
    2. Oracle 11g
    3. Oracle Driver for EF (ODAC)
    Intro For people who are starting with EF development, I recommend taking a look at Unai Zorrilla's blog; the posts are written in Spanish, but they are great! For this series, we are going to define the DB from the Oracle administrator. For that we need to follow these steps:
    1. Create a user with a password. In my example the user will be Jtorrecilla
    2. Create a tablespace
    3. Define some example tables
    (Image 1) When we have created the DB, we are going to start a new project in VS 2010. I will start a C# project. To start with EF, we need to add a new object to our project, an "ADO.NET Entity Data Model". (Image 2) The next step is to indicate that our model will be based on an existing DB, and to provide the connection string (Images 3 and 4). Once we have selected the connection string, we need to indicate that "sensitive" data will be saved in the connection (Image 5), and in the next step we select the DB objects to use in the project (Image 6). At the end of the selection, we press the Finish button; this generates an EDMX file that is added to our solution, and the IDE shows the DB schema with the selected tables and relations. (Image 7) An entity is composed of a set of properties (each matching a column of the table in the DB) and navigation properties that represent any relations with other entities. Finally With this chapter we have installed the environment, defined a DB, and configured the solution to start using EF with Oracle. In the next chapter we are going to see what an entity is and how it works. I hope you enjoy this series!
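    As a quick taste of where the series is heading, here is a minimal C# sketch (not from the original post) of querying the generated model. The context name JTorrecillaEntities and the Employees entity set are hypothetical; the EDMX wizard names the context after your model, and the connection string it generates lives in app.config:

    using System;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            // Hypothetical context generated by the EDMX wizard from the Oracle schema.
            using (var context = new JTorrecillaEntities())
            {
                // "Employees" stands in for one of the example tables defined above.
                var query = from emp in context.Employees
                            where emp.Name.StartsWith("J")
                            orderby emp.Name
                            select emp;

                foreach (var emp in query)
                    Console.WriteLine(emp.Name);
            }
        }
    }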

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-05

    - by Bob Rhubart
    Why is enterprise software often so complicated? | Rajesh Raheja (rraheja.wordpress.com)
    Rajesh Raheja shares "a few examples of requirements that lead to creation of complex platform infrastructures that make up the complex enterprise software."
    Educause Top-Ten IT Issues - the most change in a decade or more | Cole Clark (blogs.oracle.com)
    Cole Clark discusses why "higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia."
    Oracle VM RAC template - what it took | Wim Coekaerts (blogs.oracle.com)
    Wim Coekaerts shares an example that shows how easy it is to deploy a complete Oracle RAC cluster with Oracle VM.
    Oracle Cloud and Oracle Platinum Services Announcements (oracle.com)
    Featuring Larry Ellison and Mark Hurd. Wednesday, June 06, 2012. 1:00 p.m. PT - 2:30 p.m. PT
    Creating an Oracle Endeca Information Discovery 2.3 Application Part 1: Scoping and Design | Mark Rittman (www.rittmanmead.com)
    Oracle ACE Director Mark Rittman launches a new series that dives into "the various stages in building a simple Oracle Endeca Information Discovery application, using the recent Endeca Information Discovery 2.3 release."
    Introducing Decision Tables in the SOA Suite 11g | Lucas Jellema (technology.amis.nl)
    Oracle ACE Director Lucas Jellema demonstrates how "the decision table can be put to good use to implement the business logic behind the classical game of Rock, Paper and Scissors."
    Application integration: reorganise, recycle, repurpose | Andrew Clarke (radiofreetooting.blogspot.com)
    "Integration is a topic which is in everybody's bailiwick," says Oracle ACE Andrew Clarke. "The business people want to get the best value from their existing IT investments. The architects need to understand the interfaces between the silos and across the layers. The developers have to implement it."
    Using XA Transactions in Coherence-based Applications | Jonathan Purdy (blogs.oracle.com)
    Purdy shares "a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager."
    Thought for the Day "The difficulty lies, not in the new ideas, but in escaping from the old ones..." - John Maynard Keynes (June 5, 1883 - April 4, 1946) Source: Quotations Page

    Read the article

  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive on the internet; however, they could not send to Exchange 2003 mailboxes. After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left to another blogger: if inheritable permissions are not allowed on the Exchange 2003 object in Active Directory, Exchange server authentication cannot be achieved between the servers. It seems that when BlackBerry Enterprise Server gets added to 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is they also coincidentally turn off inheritance to the server object itself from its parent containers. You can re-establish inheritance without overwriting the existing ACL, however, so that the BES Admin can remain in the server object ACL. By re-establishing inheritance to the 2003 server object, mail flow was instantly restored between the servers. To re-establish inheritance:
    1. Open ADSI Edit by adding the snap-in to an MMC (it should be included on your 2008 server where Exchange 2010 is installed).
    2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
    3. In the right pane, right-click on CN=<Server Name> of your Exchange 2003 server and select Properties.
    4. Navigate to the Security tab and hit Advanced toward the bottom.
    5. Check the checkbox that reads "include inheritable permissions" toward the bottom of the dialogue box.

    Read the article

  • Update in Certification Exam Score Report Access Process!

    - by Richard Lefebvre
    Please note that exam results for all Oracle Certification exams will be accessed through CertView starting October 30th, 2012. Exam results will no longer be available at the test center or on the Pearson VUE website. Candidates will receive an email from Oracle within 30 minutes of completing the exam to let them know that their exam results are available on CertView. Candidates must have an Oracle Web Account to access CertView. This new process applies to exam results for all Oracle Certification exams - proctored and non-proctored, as well as beta exams. CertView, Oracle's self-service certification portal, will be the partners' one-stop source for all their certification and exam history! Other benefits of this change include driving all candidates to have an Oracle Web Account (which will lead to tighter integration with Oracle University records in the future), increased security around data privacy, and a higher validity rate for candidate email addresses. Existing benefits of CertView include self-service access to exam and certification records and logos, and access to Oracle's self-service certification verification. Accessing Exam Results
    Returning CertView Users
    · Click the link in the email sent by Oracle or go to certview.oracle.com
    · Select the See My New Exam Results Now link to view exam results
    · Select the Print My New Exam Results Now link to print exam results
    New CertView Users - Who Have an Oracle Web Account
    · First-time users must authenticate their CertView account
    · Account authentication requires the Oracle Testing ID and email address from your Pearson VUE profile
    · Click the link in the email sent by Oracle, or go to certview.oracle.com and follow the Authenticate My CertView Account link
    New CertView Users - Who Do Not Have an Oracle Web Account
    · CertView users are required to have an Oracle Web Account
    · To create an Oracle Web Account, go to certview.oracle.com and select the Create My Oracle Web Account Now link. Then follow the remaining instructions under I do not have an Oracle Web Account on that page.

    Read the article

  • BIDS Helper 1.6 Beta Release (now with SQL 2012 support!)

    - by Darren Gosbell
    The beta for BIDS Helper 1.6 was just released. We have not updated the version notification just yet, as we would like to get some feedback on people's experiences with the SQL 2012 version. So if you are using SQL 2012, go grab it and let us know how you go (you can post a comment on this blog post or on the BIDS Helper site itself). This is the first release that supports SQL 2012 and consequently also the first release that runs in Visual Studio 2010. A big thanks to Greg Galloway for doing the bulk of the work on this release. Please note that if you are doing an xcopy deploy, you will need to unblock the files you download or you will get a cryptic error message. This appears to be caused by a security update to either Visual Studio or the .NET Framework; the xcopy deploy instructions have been updated to show you how to do this. Below are the notes from the release page. ====== This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new...
    Analysis Services Tabular:
    - Smart Diff
    - Tabular Actions Editor
    - Tabular HideMemberIf
    - Tabular Pre-Build
    Fixes and Updates:
    - The Unused Datasets feature for Reporting Services now accounts for new features in Reporting Services 2008 R2 like Lookups, and for new features in Reporting Services 2012.
    - SSIS: emit an informational message when a variable has an expression defined and EvaluateAsExpression = False
    - SSAS: roles report points to wrong server
    - SSIS: Variable Copy / Move broken in v1.5
    - "Unused DataSets Report" not showing up in context menu on VS2005 if Solution Folders used
    - SSAS Tabular: create a UI for managing actions
    - SSAS Tabular: Smart Diff improvements for new schema and Tabular models
    - SSIS: Copy/Move Variable erroring due to custom Control Flow item icon
    - SSIS Performance Visualization: index out of range
    - Fixed bugs in AggManager when aggregation design IDs don't match names
    The exe downloads are a self-extracting installer; the zip downloads allow for an xcopy deploy. Make sure to note the updated xcopy deploy instructions for SQL Server 2012.

    Read the article

  • Announcing the Next AppAdvantage IT Leader Documentary

    - by Tanu Sood
    Close on the heels of the very successful launch of Oracle BPM 12c, we are excited to announce the next video in the Oracle AppAdvantage IT Leaders Series. As you know, the Oracle AppAdvantage IT Leaders Series profiles a successful executive in an industry-leading organization that has embarked on a business transformation or platform modernization journey by taking full advantage of the Oracle AppAdvantage pace-layered architecture. Tune in on Tuesday, September 9th at 10 am Pacific/1 pm Eastern to catch our IT executive, Regis Louis, in an in-depth conversation with IT executives from Siram S.p.a., a major energy services management company in Italy. The documentary will explore how the company is doing process orchestration and business process management to build inter-department collaboration, maintain data integrity, and offer complete transparency throughout their Request for Proposal (RFP) process. Oracle's technology expert will then do a comprehensive walk-through of Siram's IT model, the various components, and how Oracle Fusion Middleware technologies are enabling their Oracle E-Business Suite processes. Experts will be on hand to answer your questions live. Check out this live documentary webcast and find out how organizations like yours are building business agility by leveraging existing investments in Oracle Applications. Register today. And if you are on Twitter, send your questions to @OracleMiddle with #ITLeader, #AppAdvantage. We look forward to connecting with you on Tue, Sep 9. Siram Achieves Commercial Efficiency with Improved Business Process Agility Date: Tuesday, September 9, 2014 Time: 10:00 AM PT / 1:00 PM ET

    Read the article

  • How do I set image position in conky

    - by realitygenerator
    I copied and modified an existing .conkyrc file from the Ubuntu forum, and I'm trying to place the Linux Mint logo in a specific position. Below are my conkyrc file and the screenshot.
    # UBUNTU-CONKY
    # A comprehensive conky script, configured for use on
    # Ubuntu / Debian Gnome, without the need for any external scripts.
    #
    # Based on conky-jc and the default .conkyrc.
    # INCLUDES:
    # - tail of /var/log/messages
    # - netstat shows number of connections from your computer and application/PID making it. Kill spyware!
    #
    # -- Pengo
    #
    # Create own window instead of using desktop (required in nautilus)
    own_window yes
    own_window_type desktop
    own_window_transparent yes
    own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
    # Use double buffering (reduces flicker, may not work for everyone)
    double_buffer yes
    # fiddle with window
    use_spacer right
    # Use Xft?
    use_xft yes
    xftfont URW Gothic:size=8
    xftalpha 0.8
    text_buffer_size 2048
    # Update interval in seconds
    update_interval 3.0
    # Minimum size of text area
    # minimum_size 250 5
    # Draw shades?
    draw_shades no
    # Text stuff
    draw_outline no # amplifies text if yes
    draw_borders no
    uppercase no # set to yes if you want all text to be in uppercase
    # Stippled borders?
    stippled_borders 3
    # border margins
    border_margin 9
    # border width
    border_width 10
    # Default colors and also border colors, grey90 == #e5e5e5
    default_color grey
    own_window_colour brown
    own_window_transparent yes
    # Text alignment, other possible values are commented
    #alignment top_left
    #alignment top_right
    #alignment bottom_left
    #alignment bottom_right.
    alignment top_middle
    # Gap between borders of screen and text
    gap_x 10
    gap_y 10
    # Display temp in fahrenheit
    temperature_unit fahrenheit
    # Choose which screen on which to display
    # stuff after 'TEXT' will be formatted on screen
    TEXT
    $color
    ${color green}SYSTEM ${hr 2}$color
    $nodename $sysname $kernel on $machine
    LinuxMint 11 "Katya" (Oneric) ${image ~/Conky/Logo_Linux_Mint.png -s 80x60 -f 86400}
    ${color green}CPU ${hr 2}$color
    ${freq}MHz Load: ${loadavg} Temp: ${hwmon temp 1}
    $cpubar
    ${cpugraph 000000 ffffff}
    NAME PID CPU% MEM%
    ${top name 1} ${top pid 1} ${top cpu 1} ${top mem 1}
    ${top name 2} ${top pid 2} ${top cpu 2} ${top mem 2}
    ${top name 3} ${top pid 3} ${top cpu 3} ${top mem 3}
    ${top name 4} ${top pid 4} ${top cpu 4} ${top mem 4}
    ${color green}MEMORY / DISK ${hr 2}$color
    RAM: $memperc% ${membar 6}$color
    Swap: $swapperc% ${swapbar 6}$color
    Root: ${fs_free_perc /}% ${fs_bar 6 /}$color
    hda1: ${fs_free_perc /media/sda1}% ${fs_bar 6 /media/sda1}$color
    ${color green}NETWORK (${addr eth1}) ${hr 2}$color
    Down: $color${downspeed eth1} k/s ${alignr}Up: ${upspeed eth1} k/s
    ${downspeedgraph eth1 25,140 000000 ff0000} ${alignr}${upspeedgraph eth1 25,140 000000 00ff00}$color
    Total: ${totaldown eth1} ${alignr}Total: ${totalup eth1}
    ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}
    ${color green}LOGGING ${hr 2}$color
    ${execi 30 tail -n3 /var/log/messages | awk '{print " ",$5,$6,$7,$8,$9,$10}' | fold -w50}
    ${color green}FORTUNE ${hr 2}$color
    ${execi 120 fortune -s | fold -w50}
    I want to put the Mint logo right after the word (Oneric). Any help would be greatly appreciated.

    Read the article

  • Help identify the pattern for reacting to updates

    - by Mike
    There's an entity that gets updated from external sources. Update events arrive at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, what needs to be processed is the most current state of the entity. There's a point of no return during processing, where the current state of the entity (a consistent state, i.e. no partial update) is saved somewhere else, and processing continues independently of any arriving updates. Every subsequent set of updates has to trigger processing again, i.e. the system should not forget about updates. And for each entity there should be at most one processing run before the point of no return, i.e. the entity state should not be processed more than once. So what I'm looking for is a pattern for cancelling the current processing before the point of no return, or abandoning its results, if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity sits mainly in a database, with some files on disk, and the system is in .NET, with web services and message queues. What comes to my mind is a database queue-like table. An arriving update inserts a row into that table and processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded. Otherwise the processing data is persisted and it goes beyond the point of no return (a rough sketch of this check follows below). Though it looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, I would need to access the queue API at the point of no return to check for the existence of new messages, and this approach also lacks elegance. Is there a name for this pattern, and is there an existing solution?
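    Not part of the original question, but here is a minimal C# sketch of the queue-table check at the point of no return. The table and column names are hypothetical, and it assumes each update inserts a row with a monotonically increasing UpdateId that the processing run captured when it started:

    using System.Data.SqlClient;

    public class EntityProcessor
    {
        private readonly string _connectionString;

        public EntityProcessor(string connectionString)
        {
            _connectionString = connectionString;
        }

        // Called at the point of no return: returns true if our snapshot is still
        // the latest, i.e. no newer update row appeared after processing started.
        // To fully close the race window, run this check and the subsequent persist
        // inside one serializable transaction.
        public bool CanCommit(int entityId, long snapshotUpdateId)
        {
            using (var conn = new SqlConnection(_connectionString))
            {
                conn.Open();
                using (var cmd = new SqlCommand(
                    "SELECT MAX(UpdateId) FROM EntityUpdateQueue WHERE EntityId = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", entityId);
                    // At least one row exists, since an update is what launched this run.
                    long latest = (long)cmd.ExecuteScalar();

                    // If a newer update arrived, abandon this run and let the newer
                    // update's own trigger launch a fresh one.
                    return latest <= snapshotUpdateId;
                }
            }
        }
    }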

    Read the article

  • Cloud Infrastructure has a new standard

    - by macoracle
    I have been working for more than two years now in the DMTF working group tasked with creating a cloud management standard. That work has culminated in the release today of the Cloud Infrastructure Management Interface (CIMI) version 1.0 by the DMTF. CIMI is a single interface that a cloud consumer can use to manage their cloud infrastructure in multiple clouds. As CIMI is adopted by cloud vendors, no more will you need to adapt client code to each of the proprietary interfaces from these multiple vendors. Unlike a de facto standard, where typically one vendor has change control over the interface and everyone else has to reverse-engineer its inner workings, CIMI is a de jure standard under the change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements, and contributed APIs from multiple vendors. These vendors have products shipping today, and as a result CIMI has a strong foundation in real-world experience. What does CIMI allow? CIMI is both a model for the resources (computing, storage, networking) in the cloud and a RESTful protocol binding to HTTP. This means that to create a Machine (guest VM), for example, the client creates a "document" that represents the Machine resource and sends it to the server using HTTP. CIMI allows the resources to be encoded in either JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). CIMI provides a model for the resources that can be mapped to any existing cloud infrastructure offering on the market. There are some features in CIMI that may not be supported by every cloud, but CIMI also supports the discovery of which features are implemented. This means that you can still have a client that works across multiple clouds and is able to take full advantage of the features in each of them. Isn't it too early for a standard? A key feature of a successful standard is that it allows for compatible extensions to occur within the core framework of the interface itself. CIMI's feature discovery (through metadata) is used to convey to the client that additional features, which may be vendor-specific, have been implemented. As multiple vendors implement such features, they become candidates for future versions of CIMI. Thus innovation can continue in the cloud space without being slowed down by a lowest-common-denominator type of specification. Since CIMI was developed in the open by dozens of stakeholders who are already implementing infrastructure clouds, I expect to see CIMI adopted by these same companies and others over the next year or two. Cloud customers who can see the benefit of this standard should start asking their cloud vendors to show a CIMI implementation in their roadmap. For more information on CIMI and the DMTF's other cloud efforts, go to: http://dmtf.org/cloud
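    To make the protocol binding concrete, here is a minimal C# sketch (not from the article) of creating a Machine over HTTP. The endpoint URLs are hypothetical, and the JSON fields follow the general shape of a CIMI 1.0 MachineCreate document, so treat the exact attribute names as an approximation rather than a definitive rendering of the spec:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class CimiClient
    {
        static async Task Main()
        {
            // Hypothetical CIMI provider endpoint for the Machine collection.
            const string machinesUrl = "https://cloud.example.com/cimi/machines";

            // The "document" representing the Machine to create, referencing a
            // machine template by URL; field names approximate CIMI 1.0.
            const string machineCreate = @"{
                ""resourceURI"": ""http://schemas.dmtf.org/cimi/1/MachineCreate"",
                ""name"": ""demo-vm"",
                ""description"": ""Machine created via the CIMI REST binding"",
                ""machineTemplate"": { ""href"": ""https://cloud.example.com/cimi/machineTemplates/1"" }
            }";

            using (var http = new HttpClient())
            {
                var content = new StringContent(machineCreate, Encoding.UTF8, "application/json");
                HttpResponseMessage response = await http.PostAsync(machinesUrl, content);

                // A 201 Created carrying the new Machine resource is the expected outcome.
                Console.WriteLine((int)response.StatusCode);
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }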

    Read the article

  • Use Expressions with LINQ to Entities

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Recently I've been putting together a generic approach for paging the response from a WCF service. Paging changes the service signature, so it's not as simple as adding a behavior to an existing service in config, but the complexity of the paging is isolated in a generic base class. We're using the Entity Framework talking to SQL Server, so when we ask for a page using LINQ's .Take() method we get a nice efficient SQL query for just the rows we want, with minimal impact on SQL Server and network traffic. We use the maximum ID of the records returned as a high-water mark (rather than using .Skip() to go to the next record), so the approach caters for records being deleted between page requests. In the paged response we include a HasMorePages indicator, computed by comparing the max ID in the page of results to the max ID for the whole resultset - if the latter is bigger, there are more pages. In some quick performance testing, the paged version of the service performed much more slowly than the unpaged version, which was unexpected. We narrowed it down to the code which gets the max ID for the full resultset - instead of building an efficient MAX() SQL query, EF was returning the whole resultset and then computing the max ID in the service layer. It's easy to reproduce - take this AdventureWorks query:

        var context = new AdventureWorksEntities();
        var query = from od in context.SalesOrderDetail
                    where od.ModifiedDate >= modified
                       && od.SalesOrderDetailID.CompareTo(id) > 0
                    orderby od.SalesOrderDetailID
                    select od;

    We can find the maximum SalesOrderDetailID like this:

        var maxIdEfficiently = query.Max(od => od.SalesOrderDetailID);

    which produces our efficient MAX() SQL query. If we're doing this generically and we already have the ID function in a Func:

        Func<SalesOrderDetail, int> idFunc = od => od.SalesOrderDetailID;
        var maxIdInefficiently = query.Max(idFunc);

    This fetches all the results from the query and then runs Max() in code. If you look at the difference in Reflector, the first call passes an Expression to Max(), while the second passes a Func. So it's an easy fix - wrap the Func in an Expression:

        Expression<Func<SalesOrderDetail, int>> idExpression = od => od.SalesOrderDetailID;
        var maxIdEfficientlyAgain = query.Max(idExpression);

    - and we're back to an efficient MAX() statement. Evidently the EF provider can dissect an Expression and build its SQL equivalent, but it can't do that with Funcs; a generic helper capturing this is sketched below.
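    As a follow-up, a minimal sketch of how a generic base class can carry the selector so that only the efficient path is available (the names are mine, not from the original post):

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        static class PagingSketch
        {
            // Keeping the ID selector as an Expression (not a Func) lets the EF
            // provider translate Max() into a SQL MAX() instead of fetching rows.
            public static int MaxId<TEntity>(
                IQueryable<TEntity> source,
                Expression<Func<TEntity, int>> idSelector)
            {
                return source.Max(idSelector); // Queryable.Max - runs on the server
            }
        }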

    Read the article

  • How do I use IIS7 rewrite to redirect requests for (HTTP or HTTPS):// (www or no-www) .domainaliases.ext to HTTPS://maindomain.ext

    - by costax
    I have multiple domain names assigned to the same site and I want all possible access combinations redirected to one domain. In other words, whether the visitor uses http://domainalias.ext or http://www.domainalias.ext or https://www.domainalias3.ext or https://domainalias4.ext or any other combination, including http://maindomain.ext, http://www.maindomain.ext, and https://www.maindomain.ext, they are all redirected to https://maindomain.ext. I currently use the following code to partially achieve my objectives:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="CanonicalHostNameRule" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^MAINDOMAIN\.EXT$" negate="true" />
                  </conditions>
                  <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
                </rule>
                <rule name="HTTP2HTTPS" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                  </conditions>
                  <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    ...but it fails to work in all instances: it does not redirect to https://maindomain.ext when the user enters https://(www.)domainalias.ext. So my question is: are there any programmers here familiar with IIS7 rewrite who can help me modify my existing code to cover all possibilities and reroute all my domain aliases, entered by themselves or with www in front, over HTTP or HTTPS, to my main domain in HTTPS form? The logic would be: if the URL does NOT start with https://maindomain.ext, then REDIRECT to https://maindomain.ext/(plus_whatever_else_that_followed). Thank you very much for your attention; any help would be appreciated. NOTE TO MODS: If my question is not in the correct format, please edit or advise. Thanks in advance.
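    One possible direction, offered as an untested sketch rather than a definitive fix: collapse the two rules into a single rule whose conditions are OR-ed via logicalGrouping="MatchAny", so any request that is either on the wrong host or on plain HTTP gets redirected:

        <rule name="RedirectAllToCanonicalHttps" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAny">
            <!-- wrong host name ... -->
            <add input="{HTTP_HOST}" pattern="^MAINDOMAIN\.EXT$" negate="true" />
            <!-- ... or not already HTTPS -->
            <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          </conditions>
          <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
        </rule>

    Note that for https://domainalias.ext to reach the rewrite module at all, the site's HTTPS binding must present a certificate valid for that alias; otherwise the browser rejects the connection before IIS can issue any redirect, and no rewrite rule can help.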

    Read the article

  • PL/SQL to delete invalid data from token Strings

    - by Jie Chen
    The previous article describes how to delete duplicated values from a token string in bulk mode. This one extends it and shows a way to delete invalid data. Scenario: suppose we have page_two and manufacturers tables in the database, with the following DDL:

        SQL> desc page_two;
        Name                                      NULL?    TYPE
        ----------------------------------------- -------- ------------------------
        MULTILIST04                                        VARCHAR2(765)

        SQL> desc manufacturers;
        Name                                      NULL?    TYPE
        ----------------------------------------- -------- ------
        ID                                        NOT NULL NUMBER
        NAME                                               VARCHAR

    In table page_two, column multilist04 stores a token string split by commas; each token represents a valid ID in the manufacturers table. My goal is to delete invalid tokens from page_two.multilist04, i.e. those with no mapping ID in manufacturers.id. For example, in the value ,6295728,33,6295729,6295730,6295731,22, the tokens 33 and 22 are invalid, because there is no ID equal to 33 or 22 in the manufacturers table, so I need to delete 33 and 22.

        SQL> col rowid format a20;
        SQL> col multilist04 format a50;
        SQL> select rowid, multilist04 from page_two;
        ROWID                MULTILIST04
        -------------------- --------------------------------------------------
        AAB+UrADfAAAAhUAAI   ,6295728,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAJ   ,1111,6295728,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAK   ,6295728,111,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAL   ,6295728,6295729,6295730,6295731,22,
        AAB+UrADfAAAAhUAAM   ,6295728,33,6295729,6295730,6295731,22,

        SQL> select id, encode_name from manufacturers where id in (1111,11,22,33);
        No rows selected

    Solution: since PL/SQL has no built-in SPLIT function, I had to program one myself. The Split helper function extracts the token value between the current delimiter and the next one. The main program is the entry point: it reads each multilist04 value from page_two, processing row by row via a cursor. For every multilist04 value it uses the Split function to extract each token into the singValue variable, then checks whether that token exists in manufacturers.id; if not found, it sets fixFlag to 1, marking the token for deletion. A minimal sketch of the idea follows below.
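    The full Split function and main block aren't reproduced in this excerpt, so here is a minimal PL/SQL sketch of the idea. The variable names and the rebuild-the-string approach are mine, and it assumes every token is numeric:

        -- Walk each comma-delimited string, keep only tokens that map to
        -- an existing manufacturers.id, and write the cleaned string back.
        DECLARE
            v_clean page_two.multilist04%TYPE;
            v_token VARCHAR2(100);
            v_pos   PLS_INTEGER;
            v_next  PLS_INTEGER;
            v_hits  PLS_INTEGER;
        BEGIN
            FOR r IN (SELECT ROWID AS rid, multilist04 FROM page_two) LOOP
                v_clean := ',';
                v_pos   := 1;  -- position of the delimiter opening the first token
                LOOP
                    v_next := INSTR(r.multilist04, ',', v_pos + 1);
                    EXIT WHEN v_next = 0;
                    v_token := TRIM(SUBSTR(r.multilist04, v_pos + 1, v_next - v_pos - 1));
                    IF v_token IS NOT NULL THEN
                        SELECT COUNT(*) INTO v_hits
                          FROM manufacturers
                         WHERE id = TO_NUMBER(v_token);  -- assumes numeric tokens
                        IF v_hits > 0 THEN
                            v_clean := v_clean || v_token || ',';  -- keep valid token
                        END IF;
                    END IF;
                    v_pos := v_next;
                END LOOP;
                UPDATE page_two SET multilist04 = v_clean WHERE ROWID = r.rid;
            END LOOP;
            COMMIT;
        END;
        /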

    Read the article
