Search Results

Search found 25123 results on 1005 pages for 'domain model'.


  • Proper way to create and work with a subdomain?

    - by Genadinik
    My site got affected by Panda, and I am trying to see if making a subdomain would work. The site is comehike.com, and I created a subdomain, currently empty, at hiking.comehike.com. I have a directory /outdoors that has some high-quality hand-written articles, and I want to put those into the new subdomain to see what would happen. My questions are: Should I just copy and paste the files for those pages into the new subdomain's folder, and change all the links in all my pages from the original domain to the new subdomain? Or should I just do a 301 redirect to the new subdomain? And since test.site.com and www.site.com are different domains, will the new pages have to start from scratch in terms of PageRank and their rankings in the SERPs?

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs:

    - Any record generated must be able to be connected to any other record in any other user table (excluding itself... the record, not the table). These "connections" are directional, and the list of connections a record has is user ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others.
    - The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended.
    - A record's field can also include aggregate information from its connections (like obtaining average, sum, etc.) that must be updated on change from another record it's connected to.
    - To conserve memory, only relevant information must be loaded at any one time (can't load the entire database into memory at load and go from there).
    - I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db.
    - Neither the user tables, connections nor records are known at design time, as they are user generated.

    I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became... onerous (i.e., a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also lets me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cyclic recursive updates, etc.

    My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far:

    1. Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down.
    2. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it.
    3. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need.

    Option 3 also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection.

    Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
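    As a concrete reference point for option 1, here is a minimal sketch in Python/sqlite3 (illustrative names and sample data, not the program's actual schema). With one index per direction, both outgoing and incoming lookups for a given user table become single indexed queries, without duplicating each connection into two tables:

        # Minimal sketch of option 1: one connections table, one index per direction.
        # Table, column names and sample data are illustrative assumptions.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE connections (
            from_table  INTEGER NOT NULL,
            from_record INTEGER NOT NULL,
            from_order  INTEGER NOT NULL,
            to_table    INTEGER NOT NULL,
            to_record   INTEGER NOT NULL,
            to_order    INTEGER NOT NULL
        );
        -- one index per direction, so outgoing and incoming lookups are both index seeks
        CREATE INDEX ix_conn_from ON connections (from_table, from_record, from_order);
        CREATE INDEX ix_conn_to   ON connections (to_table, to_record, to_order);
        """)

        db.execute("INSERT INTO connections VALUES (1, 10, 0, 2, 77, 0)")

        # Fault in only the connections touching one user table, in both directions,
        # instead of loading the whole connections table at startup.
        outgoing = db.execute(
            "SELECT * FROM connections WHERE from_table = ? ORDER BY from_record, from_order", (1,)
        ).fetchall()
        incoming = db.execute(
            "SELECT * FROM connections WHERE to_table = ? ORDER BY to_record, to_order", (2,)
        ).fetchall()
        print(outgoing, incoming)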

    Read the article

  • HSSFS Part 3: SQL Saturday is Awesome! And DEFAULT_DOMAIN(), and how I found it

    - by Most Valuable Yak (Rob Volk)
    Just a quick post I should've done yesterday, but I was recovering from SQL Saturday #48 in Columbia, SC, where I went to some really excellent sessions by some very smart experts. If you have not yet attended a SQL Saturday, or it's been more than a month since you last did, SIGN UP NOW! While searching the OBJECT_DEFINITION() of SQL Server system procedures I stumbled across the DEFAULT_DOMAIN() function in xp_grantlogin and xp_revokelogin. I couldn't find any information on it in Books Online, and it's a very simple, self-explanatory function, but it could be useful if you work in a multi-domain environment. It's also the kind of neat thing you can find by using this query:

    SELECT OBJECT_SCHEMA_NAME([object_id]) object_schema, name
    FROM sys.all_objects
    WHERE OBJECT_DEFINITION([object_id]) LIKE '%()%'
    ORDER BY 1,2

    I'll post some elaborations and enhancements to this query in a later post, but it will get you started exploring the functional SQL Server sea. UPDATE: I goofed earlier and said SQL Saturday #46 was in Columbia. It's actually SQL Saturday #48, and SQL Saturday #46 was in Raleigh, NC.

    Read the article

  • Sending a small number of targeted emails, is it spamming?

    - by Alex Mor
    I have a directory website and I want to send focused emails, a small amount, less than 50 a month, to some of the businesses on my directory that get many visitors. The intention is to let them know many people are viewing their page and to encourage them to update it and post information on it. How can I send this small number of emails without being flagged as spam? Also, should I send them from an email address on the website's domain, or would it be better to send from a personal email? That way, if an email is occasionally tagged as spam, it won't hurt the website's reputation. Is this true?
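    As a reference point, a minimal sketch of a low-volume, personalized notification send in Python might look like the following; the SMTP host, sender address and listing data are assumptions, and the working unsubscribe link plus the slow send rate are the kind of signals that distinguish a targeted notification from a bulk blast:

        # Minimal sketch: send a small batch of personalized listing notifications.
        # SMTP host, sender address and the listings data are illustrative assumptions.
        import smtplib
        import time
        from email.message import EmailMessage

        SMTP_HOST = "mail.example-directory.com"   # hypothetical mail server
        SENDER = "updates@example-directory.com"   # an address on the site's own domain

        listings = [  # hypothetical data pulled from the directory's database
            {"email": "owner@business-one.example", "name": "Business One", "views": 420},
        ]

        with smtplib.SMTP(SMTP_HOST) as smtp:
            for item in listings:
                msg = EmailMessage()
                msg["From"] = SENDER
                msg["To"] = item["email"]
                msg["Subject"] = f"{item['name']}: your listing had {item['views']} views this month"
                # A working unsubscribe link is one of the strongest anti-spam signals.
                msg["List-Unsubscribe"] = f"<https://example-directory.com/unsubscribe?email={item['email']}>"
                msg.set_content(
                    f"Hi {item['name']},\n\n"
                    f"Your page on our directory was viewed {item['views']} times recently.\n"
                    "Log in to update your listing and add new information.\n"
                )
                smtp.send_message(msg)
                time.sleep(5)  # throttle: a slow, tiny send looks nothing like bulk spam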

    Read the article

  • List of drivers for Samsung NP300E5Z-S08IN

    - by deostroll
    I am looking for a list of drivers for my Samsung model mentioned in the title. For specs please visit the website. The laptop came with a version of FreeDOS installed, which I overwrote entirely with Windows 7. The laptop also shipped with a software CD containing some driver software. Below is the list of software; I want to know the equivalent software I can get for Ubuntu from the repositories, specifically for Ubuntu 12.04:

    - Chipset driver
    - Intel ME Interface Driver
    - Intel Rapid Storage Technology
    - Graphics Driver
    - NVidia graphics driver
    - Sound driver
    - LAN driver
    - Wireless LAN driver
    - Bluetooth driver
    - Touchpad driver

    PS: don't forget to check the website for the specs.

    Read the article

  • Carpool logical architecture

    - by enrmarc
    I'm designing a carpool system (drivers can publish their routes and passengers can subscribe to them) with web services (Axis2) and Android clients (ksoap2). I have been having problems with the logical architecture of the system and I wondered if this architecture is fine. And another question: for that architecture (if it is OK), what would the package structure be? I suppose something like this:

    (In Android)
    package org.carpool.presentation - all the activities here (and maybe an MVC pattern)

    (On the server)
    package org.carpool.services - public interfaces (for example: register(User user), publishRoute(Route route))
    package org.carpool.domain - POJOs (for example: User.java, Route.java, etc.)
    package org.carpool.persistence - DAO interface and implementation (JDBC or Hibernate)

    Read the article

  • SEO indexing with dynamic titles, keywords and description

    - by Andrea Turri
    I'm working on a worldwide website (all on one single domain), so I'm considering creating dynamic titles, descriptions, keywords and headings for each location. What I'm doing is getting information from the IP of the user and showing, for example, a dynamic title:

    var userCity = codeToGetCityFromIP;
    <title>Welcome to userCity</title>
    // and the same for description, keywords and headings...

    Obviously the real code is different... I'd like to know if it is a good solution to create multiple SEO indexing based on cities. I'm also using geolocation and I do the same using the values it returns. Am I doing this right, or are there more effective ways to index in different countries and cities without creating a separate website for each city of the world? Thanks.
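    For reference, a server-side sketch of the same idea (assuming a Python/Flask app and a hypothetical geo-lookup helper; the snippet above is only pseudocode) could look like this, keeping in mind that any visitor, including a crawler, only ever sees the title generated for its own IP:

        # Minimal sketch: render a city-specific <title> server-side.
        # Flask and city_from_ip() are assumptions made for illustration.
        from flask import Flask, request, render_template_string

        app = Flask(__name__)

        PAGE = "<html><head><title>Welcome to {{ city }}</title></head><body>...</body></html>"

        def city_from_ip(ip):
            # Hypothetical helper; a real site would query a GeoIP database or service.
            return "London"

        @app.route("/")
        def home():
            city = city_from_ip(request.remote_addr)
            return render_template_string(PAGE, city=city)

        if __name__ == "__main__":
            app.run()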

    Read the article

  • The Information Driven Value Chain - Part 1

    - by Paul Homchick
    One hundred years ago, there were places on Earth that no man had ever seen.  Today, a man standing in one of those places can instantaneously communicate with someone who may be strolling down the street on his way to lunch half way around the globe.  Our world is shrinking and becoming virtual. It is a world of incredible bounty and speed where we can get a product delivered to us anywhere on earth within a day or two. However, this world is also one of challenge where volatility, uncertainty, risk and chaos are our daily companions. To prosper amid the realities of this new world, the enterprise needs a business model. Globalization and instant communications demand greater operational flexibility than ever before. Extended supply chains have elevated the management of risk to a central concern, and regulatory demands from multiple governments place an increasing burden of compliance on companies. Finally, the speed of today's business requires continuous innovation to keep from falling behind the global competition.

    Read the article

  • GNOME Shell Overview animation is slow on my NVIDIA 320M

    - by AllanCaeg
    I'm running Ubuntu 10.10 on my MacBook Air 11" (late 2010, model 3,1). Compiz runs fine, as do most of GNOME Shell's animations. The animation for switching to and from the GNOME Shell overview is just very slow. Unfortunately, it's the most common animation in Shell. I already applied the patch I found at http://live.gnome.org/GnomeShell/SwatList:

    cd ~/gnome-shell/source/gnome-shell
    curl http://bugzilla-attachments.gnome.org/attachment.cgi?id=157326 > shell-animations-nvidia.patch
    git am shell-animations-nvidia.patch

    but the issue is still here. How do I fix this?

    Read the article

  • What are the files pushed to MDS?

    - by harsh.singla
    All files under AIAComponents will move to MDS. This includes EnterpriseObjectLibrary, EnterpriseBusinessServiceLibrary, ApplicationObjectLibrary, ApplicationBusinessServiceLibrary, B2BObjectLibrary, ExtensionServiceLibrary, and UtilityArtifacts. Some common transformation (.xsl) files, kept under the Transformations folder, are also moved to MDS. The AIAConfigurationProperties.xml file will be there in MDS. Every cross-reference (.xref) object will also be there, as will every Domain Value Map (.dvm). So will the common fault policy, which is included in a composite by default during composite generation if a user does not choose to customize the fault policy. All these files are located under the AIAMetaData directory and then placed in their respective folders. We are also planning to put Error Handling and BSR system-related data into MDS.

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently from other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.

    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical devices software background, I find this unbelievable.) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is suddenly starting to grow, with some big clients coming on board and more smaller ones. I have been asked to look at source control options in order to eliminate the deploying of buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases.

    I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev).

    This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released.

    Other feature branches can grab changes from other branches when they need/want, so eventually all changes get absorbed into everyone else's. This fits very much with a pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this?

    A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is about just not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • What is the supposed productivity gain of dynamic typing?

    - by hstoerr
    I have often heard the claim that dynamically typed languages are more productive than statically typed languages. What are the reasons for this claim? Isn't it just tooling with modern concepts like convention over configuration, the use of functional programming, advanced programming models and the use of consistent abstractions? Admittedly there is less clutter because the (for instance in Java) often redundant type declarations are not needed, but you can also omit most type declarations in statically typed languages that use type inference, without losing the other advantages of static typing. And all of this is available for modern statically typed languages like Scala as well. So: what is there to say for productivity with dynamic typing that really is an advantage of the type model itself?

    Read the article

  • External USB hard drive makes noises when the computer is turned off

    - by Amir Adar
    I have an external USB hard drive of 500GB. I can't tell what model it is exactly, as nothing specific is written on it and I don't have the box anymore. I use it as a backup disk. It works absolutely fine when the computer is turned on: no problems with writing or reading, and everything is done in dead silence. However, if I turn the computer off and the disk is still connected, it stays on and makes clicking noises. For that reason I only connect it when I need to back up or restore. Does that mean there's a problem with the disk, or with some preferences in the system itself? Or something else?

    Read the article

  • Best/Easiest Technology for a RESTful webservice [closed]

    - by user1751547
    So I'm going to be creating a phone app + website that will need to utilize a web service. Web services are completely outside my domain, so I'm not entirely sure where to start. Does anybody have any suggestions on the technology stack I should use (mainly in terms of ease of use and reliability)? So far what I've looked at are:

    - RoR
    - Python + Django + TastyPie
    - Python + Flask
    - Microsoft WCF 3.5
    - PHP + some framework

    I would rather not do anything with Java. I'm leaning towards the Python + Django + TastyPie route as it seems like it would be easy to get up and going and to learn in general. My only concern with it is the reliability of the libraries (feature-breaking updates, abandonment, etc.). Also, I would prefer to create the website with the same framework so I wouldn't have to deal with learning and using two different ones. Any advice would be helpful, thanks.
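    For a sense of scale, a minimal JSON endpoint in Flask (one of the stacks listed above) is only a few lines; the resource names and in-memory data below are hypothetical:

        # Minimal sketch of a JSON web service with Flask; names and data are illustrative.
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        items = {1: {"id": 1, "name": "first item"}}  # stand-in for a real database

        @app.route("/api/items/<int:item_id>", methods=["GET"])
        def get_item(item_id):
            item = items.get(item_id)
            return (jsonify(item), 200) if item else (jsonify(error="not found"), 404)

        @app.route("/api/items", methods=["POST"])
        def create_item():
            data = request.get_json()
            new_id = max(items) + 1
            items[new_id] = {"id": new_id, "name": data["name"]}
            return jsonify(items[new_id]), 201

        if __name__ == "__main__":
            app.run()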

    Read the article

  • Where To Begin To Make A Website

    - by lolyoshi
    I'm a newbie in web programming and haven't done anything related to websites before. Now, my new task is creating a website using Java, JSP, HTML, CSS, MySQL, Apache and the Spring Framework (MVC model). I want to know what I should research if I want my website to have functions such as posting, commenting on, deleting and editing entries, like a forum. What do I need to know besides the things above? I also don't know how to update my website automatically when there are changes on the site, such as the top-viewed products or the best products; I don't think I'll input or change them manually. So, which tools or languages can support that? Thanks in advance.

    Read the article

  • Easing the Journey to the Private Cloud with Oracle Consulting

    - by MichaelM-Oracle
    By Sanjai Marimadaiah, Senior Director, Strategy & Business Development – Cloud Solutions, Oracle Consulting Services

    Business leaders are now leading the charge on how their firms can profit from cloud solutions. Agility and innovation are becoming the primary drivers of the business case for the cloud, even more than the anticipated cost savings. Leaders need to find the right strategy and optimize the use of cloud-based applications across their enterprise-computing infrastructure.

    The Problem – Current State: With prevalent IT practices, many organizations find that they run multiple IT solutions serving similar business needs. This has led to the proliferation of technology stacks, for example: Oracle 10g on Sun T4 running Solaris 9; Oracle 11g on Exadata running Linux; or Oracle 12c on commodity x86 servers. This variance has a huge impact on an organization's agility and expenses, and requires IT professionals with varied skills as well as ongoing training for different systems and tools. Fortunately there is a practical business strategy to overcome this unneeded redundancy. Thus begins a journey to the right cloud computing solution.

    The Solution – Cloud Services from Oracle Consulting Services (OCS): Oracle Consulting Services (OCS) works closely with our clients as trusted advisors to proactively respond to business needs and IT concerns. OCS understands that making the transition to cloud solutions begins with a strategic conversation, based on its deep expertise in successfully completing private cloud service engagements with several companies. For a journey to the cloud, Oracle Consulting Services leads the client through four phases (standardization, consolidation, service delivery, and enterprise cloud) to achieve optimal returns.

    Phase 1 - Standardization: Oracle Consulting Services (OCS) works with clients to evaluate their business requirements and propose a set of standard solution stacks for various IT solutions. This is an opportune time to evaluate cloud-ready solutions, such as Oracle 12c, Oracle Exadata, and the Oracle Database Appliance (ODA). The OCS consultants, together with the delivery team, then turn to upgrading and migrating existing solution stacks to standardized offerings. OCS has the expertise and tools to complete this stage in a fraction of the time required by other IT services companies. Clients quickly realize cost savings in tools, processes, and the type/number of resources required. This standardization also improves the agility of IT organizations and their ability to respond to the needs of various business units.

    Phase 2 - Consolidation: During the consolidation phase, OCS consultants programmatically consolidate hundreds of databases onto a smaller number of servers to improve utilization, reduce floor space, and optimize maintenance costs. Consolidation helps clients realize huge savings in CapEx investments and shrink OpEx costs. The use of engineered systems, such as Oracle Exadata, greatly reduces the client's risk of moving to a new solution stack. OCS recommends that clients pursue Phase 1 (Standardization) and Phase 2 (Consolidation) simultaneously to reduce the overall time, effort, and expense of the cloud journey.

    Phase 3 - Service Delivery: Once a client is on a path of standardization and consolidation, OCS consultants create Service Catalogues based on the SLA requirements and the criticality of the solutions. The number and types of Service Catalogues (Platinum, Gold, Silver, Bronze, etc.) vary from client to client. OCS consultants also implement a variety of value-added cloud solutions, including monitoring, metering, and charge-back solutions. At this stage, clients are able to achieve a high level of understanding in their cloud journey. Their IT organizations are operating efficiently and are more agile in responding to the needs of business units.

    Phase 4 - Enterprise Cloud: In the final phase of the cloud journey, the economics of the IT organization change. Business units can request services on demand; applications can be deployed and consumed on a pay-as-you-go model. OCS has the expertise and capabilities to establish the processes, programs, and solutions required for IT organizations to transform how they interact with business units.

    The Promise of Cloud Solutions: Depending on the size and complexity of their business model, some clients are able to abbreviate some phases of their cloud journey. Cloud solutions are still evolving and there is a rapid pace of innovation to transform how IT organizations operate. The lesson is clear: cloud solutions hold a lot of promise for business agility. Business leaders can now leverage an additional set of capabilities and services. They can ramp up their pace of innovation. With cloud maturity, they can compete more effectively in their respective markets. But there are certainly challenges ahead. A skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. Oracle Consulting Services has the expertise and a portfolio of services to help clients succeed on their journey to the cloud.

    Read the article

  • Mobile miscellany; 14 April

    Some updates on a few developing stories in the mobile space. First of all, the Palm acquisition. Right now the chaos is only increasing. Mobile consultant Tomi Ahonen gives an excellent overview of all major mobile players and why they would want to acquire Palm — or not. His bet is on Lenovo, with HTC running second. See also this piece which pinpoints Palm’s distribution model (or rather lack thereof) as a serious problem. The Sprint-exclusivity in the US was a bad idea. Meanwhile, Huawei and...

    Read the article

  • Some New .NET Downloads and Resources

    - by Kevin Grossnicklaus
    Last week I was fortunate enough to spend time in Redmond on Microsoft's campus for the 2011 Microsoft MVP Summit. It was great to hang out with a number of old friends and get the opportunity to talk tech with the various product teams up at Microsoft. The weather wasn't exactly sunny but Microsoft always does a great job with the Summit and everyone had a blast (heck, I even got to run the bases at SafeCo field). While much of what we saw is covered under NDA, there are a ton of great things in the pipeline from Microsoft and many things that are already available (or just became so) that I wasn't necessarily aware of. The purpose of this post is to share some of the info I learned on resources and tools available to .NET developers today. Please let me know if you have any questions (or if you know of something else cool which might benefit others). Enjoy!

    Visual Studio 2010 SP1

    Microsoft has issued the RTM release of Visual Studio 2010 SP1. You can download the full SP1 on MSDN as of today (March 10th to the general public) and take advantage of such things as:

    - Silverlight 4 is included in the box (as opposed to a separate install)
    - Silverlight 4 Profiling
    - WCF RIA Services SP1
    - Intellitrace for 64-bit and SharePoint
    - ASP.NET now easily supports IIS Express and SQL CE

    Want a description of all that's new beyond the above biased list (which arguably only contains items I think are important)? Check out this KB article.

    Portable Library Tools CTP

    Without much fanfare Microsoft has released a CTP of a new add-in to Visual Studio 2010 which simplifies code sharing between projects targeting different runtimes (i.e. Silverlight, WPF, Win7 Phone, XBox). With this add-in installed you can add a new project of type "Portable Library" and specify which platforms you wish to target. Once that is done, any code added to this library will be limited to using only features which are common to all selected frameworks. Other projects can now reference this portable library and be provided assemblies custom built to their environment. This greatly simplifies the current process of sharing linked files between platforms like WPF and Silverlight. You can find out more about this CTP and how it works on this great blog post.

    Visual Studio Async CTP

    Microsoft has also released a CTP of a set of language and framework enhancements to provide a much more powerful asynchronous programming model. Due to the focus on async programming on all types of platforms (and it being the ONLY option in Silverlight and Win7 Phone), a move towards a simpler and more understandable model is always a good thing. This CTP (called the Visual Studio Async CTP) can be downloaded here. You can read more about this CTP on this blog post.

    MSDN Code Samples Gallery

    Microsoft has also launched a new code samples gallery on their MSDN site: http://code.msdn.microsoft.com/. This site allows you to easily search for small samples of code related to a particular technology or platform. If a sample of code you are looking for is not found, you can request one via the site and other developers can see your request and provide a sample to the site to suit your needs. You can also peruse requested samples and, if you find a scenario where you can provide value, upload your own sample for the benefit of others. Samples are packaged into the VS .vsix format and include any necessary references/dependencies. By using .vsix as the deployment mechanism, samples installed from the site are kept in your Visual Studio 2010 Samples Gallery for your future reference. If you get a chance, check out the site and see how it is done. Although a somewhat simple concept, I was very impressed with their implementation and the way they went about trying to suit a need. I'll definitely be looking there in the future as I need something or want to share something.

    MSDN Search Capabilities

    Another item I learned recently and was not aware of (that might seem trivial to some) is the power of the MSDN site's search capabilities. Between the Code Samples Gallery described above and the search enhancements on MSDN, Microsoft is definitely investing in their platform to help provide developers of all skill levels the tools and resources they need to be successful. What do I mean by the MSDN search capability and why should you care? If you go to the MSDN home page (http://msdn.microsoft.com) and use the "Search MSDN with Bing" box at the very top of the page you will see some very interesting results. First, the search actually doesn't just search the MSDN library; it searches:

    - MSDN Library
    - All Microsoft Blogs
    - CodePlex
    - StackOverflow
    - Downloads
    - MSDN Magazine
    - Support Knowledgebase

    (I'm not sure it even ends there, but the above are all I know of.) Beyond just searching all the above locations, the results are formatted very nicely to give some contextual information based on where the result came from. For example, if a keyword search returned results from CodePlex, each row in the search results screen would include a large amount of information specific to CodePlex: looking at those results immediately tells you everything from the page views to the CodePlex ratings. All in all, knowing that this much information is indexed and available from a single search location will lead me to utilize this as one of my initial searches for development information.

    Read the article

  • Resetting Google's data for the website

    - by Giorgi
    I own a domain which I was using for experimenting with different platforms and content management systems. I really did not care about SEO while I was building the website, so I guess my website has quite a 'bad reputation' in Google search. My website is almost finished and I am planning to launch it soon, but I would like to tell Google to forget everything it knows about it and start over. What's the correct way of doing this? I found "Requesting reconsideration of site" but I'm not sure if that's the best way to do it. Any advice?

    Read the article

  • IRM Item Codes – what are they for?

    - by martin.abrahams
    A number of colleagues have been asking about IRM item codes recently – what are they for, when are they useful, how can you control them to meet some customer requirements? This is quite a big topic, but this article provides a few answers.

    An item code is part of the metadata of every sealed document – unless you define a custom metadata model. The item code is defined when a file is sealed, and usually defaults to a timestamp/filename combination. This time/name combo tends to make item codes unique for each new document, but actually item codes are not necessarily unique, as will become clear shortly. In most scenarios, item codes are not relevant to the evaluation of a user's rights - the context name is the critical piece of metadata, as a user typically has a role that grants access to an entire classification of information regardless of item code. This is key to the simplicity and manageability of the Oracle IRM solution. Item codes are occasionally exposed to users in the UI, but most users probably never notice and never care. Nevertheless, here is one example of where you can see an item code – when you hover the mouse pointer over a sealed file. As you see, the item code for this freshly created file combines a timestamp with the file name.

    But what are item codes for? The first benefit of item codes is that they enable you to manage exceptions to the policy defined for a context. Thus, I might have access to all oracle – internal files - except for 2011_03_11 13:33:29 Board Minutes.sdocx. This simple mechanism enables Oracle IRM to provide file-by-file control where appropriate, whilst offering the scalability and manageability of classification-based control for the majority of users and content. You really don't want to be managing each file individually, but never say never. Item codes can also be used for the opposite effect – to include a file in a user's rights when their role would ordinarily deny access. So, you can assign a role that allows access only to specified item codes. For example, my role might say that I have access to precisely one file – the one shown above.

    So how are item codes set? In the vast majority of scenarios, item codes are set automatically as part of the sealing process. The sealing API uses the timestamp and filename as shown, and the user need not even realise that this has happened. This automatically creates item codes that are for all practical purposes unique - and that are also intelligible to users who might want to refer to them when viewing or assigning rights in the management UI. It is also possible for suitably authorised users and applications to set the item code manually or programmatically if required.

    Setting the item code manually using the IRM Desktop: The manual process is a simple extension of the sealing task. An authorised user can select the Advanced… sealing option, and will see a dialog that offers the option to specify the item code. To see this option, the user's role needs the Set Item Code right – you don't want most users to give any thought at all to item codes, so by default the option is hidden.

    Setting the item code programmatically: A more common scenario is that an application controls the item code programmatically. For example, a document management system that seals documents as part of a workflow might set the item code to match the document's unique identifier in its repository. This offers the option to tie IRM rights evaluation directly to the security model defined in the document management system. Again, the sealing application needs to be authorised to Set Item Code.

    The Payslip Scenario: To give a concrete example of how item codes might be used in a real-world scenario, consider a Human Resources workflow such as payslips. The goal might be to allow the HR team to have access to all payslips, but each employee to have access only to their own payslips. To enable this, you might have an IRM classification called Payslips. The HR team have a role in the normal way that allows access to all payslips. However, each employee would have an Item Reader role that only allows them to access files that have a particular item code – and that item code might match the employee's payroll number. So, employee number 123123123 would have access to items with that code. This shows why item codes are not necessarily unique – you can deliberately set the same code on many files for ease of administration. The employees might have the right to unseal or print their payslip, so the solution acts as a secure delivery mechanism that allows payslips to be distributed via corporate email without any fear that they might be accessed by IT administrators, or forwarded accidentally to anyone other than the intended recipient. All that remains is to ensure that as each user's payslip is sealed, it is assigned the correct item code – something that is easily managed by a simple IRM sealing application. Each month, an employee's payslip is sealed with the same item code, so you do not need to keep amending the list of items that the user has access to – they have access to all documents that carry their employee code.

    Read the article

  • Batch file to Delete Old Virtual Directories.

    - by Michael Freidgeim
    On some servers we have many old virtual directories created for previous versions of our application. The IIS user interface allows you to delete only one at a time. Fortunately we can use IIS scripts as described in "How to manage Web sites and Web virtual directories by using command-line scripts in IIS 6.0". I've created a batch file, DeleteOldVDirs.cmd:

    rem http://support.microsoft.com/kb/816568
    rem syntax: iisvdir /delete WebSite [/Virtual Path]Name [/s Computer [/u [Domain\]User /p Password]]
    REM list all directories and create batch of deletes
    iisvdir /query "Default Web Site"
    echo "Enter Ctrl-C if you want to stop deleting"
    Pause
    iisvdir /delete "Default Web Site/VDirName1"
    iisvdir /delete "Default Web Site/VDirName2"
    ...

    If the name of the website or virtual directory contains spaces (e.g. "Default Web Site"), don't forget to use double quotes. Note that the batch doesn't delete the physical directories from the file system. You need to delete them using Windows Explorer, but it does support multiple selection!

    Read the article

  • Sound issues after trying everything

    - by Lerp
    I cannot get my sound working properly; no matter what I do, there's always some problem. It's very annoying, as it's the only thing preventing me from making Ubuntu my main OS. At the moment my sound always plays through both my speakers and my headphones regardless, except that the sound through the headphones is crackly. It is also a bit quiet even though everything is maxed. I've managed to improve the situation to a point where the sound out of my speakers is perfect, but I have none at all from my headphones. I do have two connectors listed in the sound settings, but regardless of which one is selected it always plays through the speakers. I think this might have something to do with the fact that my speakers are plugged into the front of my computer, typically the headphone jack, and my headphones are plugged into the back; but when I try disconnecting the speakers from the front there is still no sound from the headphones. I fixed the speaker sound by going through the sound settings and making sure they were all set to 100%, then rebooting.

    Things I have tried:

    - Maxing everything and unmuting everything in alsamixer
    - Uninstalling pulseaudio
    - Making gstreamer use only alsa via gstreamer-properties. This worked with the sound test button, including independent sound between headphones and speakers, but when I reset the computer it no longer worked. So I tried setting it manually in gconf-editor, which didn't work either.
    - Reinstalling alsa and pulseaudio
    - Setting the model in /etc/modprobe.d/alsa-base.conf to 6stack and 6stack-dig; neither worked.
    - Upgrading to 12.10

    Here's some command output to help you diagnose my problem.

    aplay -l
    **** List of PLAYBACK Hardware Devices ****
    card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog]
      Subdevices: 0/1
      Subdevice #0: subdevice #0
    card 0: Intel [HDA Intel], device 1: AD198x Digital [AD198x Digital]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 0: Intel [HDA Intel], device 2: AD198x Headphone [AD198x Headphone]
      Subdevices: 1/1
      Subdevice #0: subdevice #0

    sudo lshw -C sound
    *-multimedia
      description: Audio device
      product: 82801JI (ICH10 Family) HD Audio Controller
      vendor: Intel Corporation
      physical id: 1b
      bus info: pci@0000:00:1b.0
      version: 00
      width: 64 bits
      clock: 33MHz
      capabilities: pm msi pciexpress bus_master cap_list
      configuration: driver=snd_hda_intel latency=0
      resources: irq:70 memory:f7ff8000-f7ffbfff

    cat /proc/asound/card*/codec* | grep "Codec"
    Codec: Analog Devices AD1989B

    cat /etc/modprobe.d/alsa-base.conf
    # autoloader aliases
    install sound-slot-0 /sbin/modprobe snd-card-0
    install sound-slot-1 /sbin/modprobe snd-card-1
    install sound-slot-2 /sbin/modprobe snd-card-2
    install sound-slot-3 /sbin/modprobe snd-card-3
    install sound-slot-4 /sbin/modprobe snd-card-4
    install sound-slot-5 /sbin/modprobe snd-card-5
    install sound-slot-6 /sbin/modprobe snd-card-6
    install sound-slot-7 /sbin/modprobe snd-card-7
    # Cause optional modules to be loaded above generic modules
    install snd /sbin/modprobe --ignore-install snd $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-ioctl32 ; /sbin/modprobe --quiet --use-blacklist snd-seq ; }
    #
    # Workaround at bug #499695 (reverted in Ubuntu see LP #319505)
    install snd-pcm /sbin/modprobe --ignore-install snd-pcm $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-pcm-oss ; : ; }
    install snd-mixer /sbin/modprobe --ignore-install snd-mixer $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-mixer-oss ; : ; }
    install snd-seq /sbin/modprobe --ignore-install snd-seq $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; /sbin/modprobe --quiet --use-blacklist snd-seq-oss ; : ; }
    #
    install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; : ; }
    # Cause optional modules to be loaded above sound card driver modules
    install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-emu10k1-synth ; }
    install snd-via82xx /sbin/modprobe --ignore-install snd-via82xx $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq ; }
    # Load saa7134-alsa instead of saa7134 (which gets dragged in by it anyway)
    install saa7134 /sbin/modprobe --ignore-install saa7134 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist saa7134-alsa ; : ; }
    # Prevent abnormal drivers from grabbing index 0
    options bt87x index=-2
    options cx88_alsa index=-2
    options saa7134-alsa index=-2
    options snd-atiixp-modem index=-2
    options snd-intel8x0m index=-2
    options snd-via82xx-modem index=-2
    options snd-usb-audio index=-2
    options snd-usb-caiaq index=-2
    options snd-usb-ua101 index=-2
    options snd-usb-us122l index=-2
    options snd-usb-usx2y index=-2
    # Ubuntu #62691, enable MPU for snd-cmipci
    options snd-cmipci mpu_port=0x330 fm_port=0x388
    # Keep snd-pcsp from being loaded as first soundcard
    options snd-pcsp index=-2
    # Keep snd-usb-audio from beeing loaded as first soundcard
    options snd-usb-audio index=-2
    options snd-hda-intel model=6stack

    Read the article

  • Introdução ao NHibernate on TechDays 2010

    - by Ricardo Peres
    I’ve been working on the agenda for my presentation titled Introdução ao NHibernate that I’ll be giving at TechDays 2010, and I would like to request your assistance. If there is any subject you’d like me to talk about, you can suggest it to me. For now, I’m thinking of the following topics:

    - Domain Driven Design with NHibernate
    - Inheritance Mapping Strategies (Table Per Class Hierarchy, Table Per Type, Table Per Concrete Type, Mixed)
    - Mappings (hbm.xml, NHibernate Attributes, Fluent NHibernate, ConfORM)
    - Supported querying types (ID, HQL, LINQ, Criteria API, QueryOver, SQL)
    - Entity Relationships
    - Custom Types
    - Caching
    - Interceptors and Listeners
    - Advanced Usage (Duck Typing, EntityMode Map, …)
    - Other projects (NHibernate Validator, NHibernate Search, NHibernate Shards, …)
    - ASP.NET Integration
    - ASP.NET Dynamic Data Integration
    - WCF Data Services Integration

    Comments?

    Read the article

  • Multiple 301 redirects, do search engines/viewers see them all?

    - by Karim
    I've put in place lots of different 301 rules to deal with numerous URL changes, and for certain URLs there are 3-4 different 301 redirects landing the visitor on the new URL. I heard that a 301 loses PageRank/link juice. All the 301s are on-site, for the same domain, with a mix of PHP 301s and .htaccess 301s. So, for instance:

    articles/news.php?id=2 --- articles/blog.php?id=2 [filename change]
    articles/* --- /* [subdir to root]
    /blog.php?id=2 --- /title-of-post [mod rewrite url change]

    So if you were to visit /articles/news.php?id=2 there would be two 301 redirects until you land on /yellow-wellington-boots/. My question is: does Google see the intermediate redirects, or just the final page the 301s redirect to?
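    As a quick check, a few lines of Python (using the requests library; the URL below is only an illustrative placeholder) will print every hop in a redirect chain, so you can see exactly how many 301s a visitor or crawler passes through before reaching the final page:

        # Minimal sketch: print each hop of a redirect chain and its status code.
        # The URL is an example, not one of the site's real URLs.
        import requests

        resp = requests.get("http://example.com/articles/news.php?id=2", allow_redirects=True)

        for hop in resp.history:  # each intermediate (redirect) response, in order
            print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
        print(resp.status_code, resp.url)  # the final destination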

    Read the article

  • How to display a dependent list box disabled if no child data exist

    - by frank.nimphius
    A requirement on OTN was to disable the dependent list box of a model-driven list-of-values configuration whenever the list is empty. To disable the dependent list, the af:selectOneChoice component needs to be refreshed with every value change of the parent list, which however is already the case as the list boxes are already dependent. When you create model-driven lists of values as choice lists in an ADF Faces page, two ADF list bindings are implicitly created in the PageDef file of the page that hosts the input form. At runtime, a list binding is an instance of FacesCtrlListBinding, which exposes getItems() as a method to access a list of available child data (java.util.List). Using Expression Language, the list is accessible with #{bindings.list_attribute_name.items}

    To dynamically set the disabled property on the dependent af:selectOneChoice component, however, you need a managed bean that exposes the following two methods:

    //empty - but required - setter method
    public void setIsEmpty(boolean isEmpty) {}

    //the method that returns true/false when the list is empty or has values
    public boolean isIsEmpty() {
      FacesContext fctx = FacesContext.getCurrentInstance();
      ELContext elctx = fctx.getELContext();
      ExpressionFactory exprFactory = fctx.getApplication().getExpressionFactory();
      ValueExpression vexpr = exprFactory.createValueExpression(elctx, "#{bindings.EmployeeId.items}", Object.class);
      List employeesList = (List) vexpr.getValue(elctx);
      return employeesList.isEmpty() ? true : false;
    }

    If referenced from the dependent choice list, as shown below, the list is disabled whenever it contains no list data:

    <!-- master list -->
    <af:selectOneChoice value="#{bindings.DepartmentId.inputValue}"
                        label="#{bindings.DepartmentId.label}"
                        required="#{bindings.DepartmentId.hints.mandatory}"
                        shortDesc="#{bindings.DepartmentId.hints.tooltip}"
                        id="soc1" autoSubmit="true">
      <f:selectItems value="#{bindings.DepartmentId.items}" id="si1"/>
    </af:selectOneChoice>

    <!-- dependent list -->
    <af:selectOneChoice value="#{bindings.EmployeeId.inputValue}"
                        label="#{bindings.EmployeeId.label}"
                        required="#{bindings.EmployeeId.hints.mandatory}"
                        shortDesc="#{bindings.EmployeeId.hints.tooltip}"
                        id="soc2" disabled="#{lovTestbean.isEmpty}"
                        partialTriggers="soc1">
      <f:selectItems value="#{bindings.EmployeeId.items}" id="si2"/>
    </af:selectOneChoice>

    Read the article
