Search Results

Search found 31452 results on 1259 pages for 'database independent'.

Page 455/1259

  • How to do pragmatic high-level/meta-programming?

    - by Lenny222
    Imagine you have implemented the creation of a nice path-based star shape in Lisp. Then you discover Processing and you re-implement the whole code, because Processing/Java/Java2D is different. Then you want to tinker with libcinder, so you port your code to C++/Cairo. You are (re)writing a lot of boilerplate code, while the actual requirement "create a star shape" (or "create a path, moveto x y, lineto x y") has not changed. What are the options for encapsulating those implementation details? Some sort of pragmatic meta-programming? Maybe an expert system? How would you define your core business logic in as language-independent a way as possible?
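    One pragmatic option is to describe the shape as plain data and keep each environment's drawing code as a thin adapter. A minimal sketch in Python (PathOp, star_path and the backend protocol are illustrative names, not from any particular library):

        import math
        from dataclasses import dataclass

        @dataclass
        class PathOp:
            op: str      # "moveto" or "lineto"
            x: float
            y: float

        def star_path(cx, cy, r_outer, r_inner, points=5):
            # Build a star as a backend-neutral list of path operations.
            ops = []
            for i in range(points * 2):
                r = r_outer if i % 2 == 0 else r_inner
                angle = math.pi * i / points - math.pi / 2
                ops.append(PathOp("moveto" if i == 0 else "lineto",
                                  cx + r * math.cos(angle),
                                  cy + r * math.sin(angle)))
            return ops

        def render(ops, backend):
            # Each target (Java2D, Cairo, ...) supplies move_to/line_to.
            for op in ops:
                if op.op == "moveto":
                    backend.move_to(op.x, op.y)
                else:
                    backend.line_to(op.x, op.y)

    Only the render adapter has to be rewritten per platform; the star logic itself stays as language-agnostic data.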

    Read the article

  • Addicted to Oil

    30 years ago, Brazil imported 80% of its oil. With a strong sense of purpose, Brazil invested heavily in bio-fuel technology and refocused its transportation energy towards a resource Brazil could manufacture internally: sugar-based ethanol. Today, Brazil uses flexible-fuel vehicles that can run on gas, ethanol, or any combination of the two. It still has a mandate to be 100% independent of oil by 2011. Yes, Brazil still drills for oil, and they still use it - plenty of it. But at least they've had...

    Read the article

  • File system layout for multiple build targets

    - by Yttrill
    I am seeking some ideas for how to build and install software with several parameters. These include the target OS, target platform CPU details, debugging variant, etc. Some parts of the install are shared, such as documentation and many platform-independent files; others are not, such as 64- and 32-bit libraries when these are separated and not together in a multi-arch library. On big networked platforms one often has multiple computers sharing some large server space, so there is actually cause to have even Windows and Unix binaries on the same disk. My product has already fixed an install philosophy of $INSTALL_ROOT/genericname/version/ so that multiple versions can coexist. The question is: how do I manage the layout of all the other stuff?
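    As one possible convention under the fixed $INSTALL_ROOT/genericname/version/ root, a small helper could keep shared files in one subtree and platform-specific files in per-target subdirectories. A sketch only, with hypothetical directory names:

        def install_path(root, name, version, os_name=None, arch=None, variant=None):
            # Shared files (docs, platform-independent data) live in "share";
            # platform-specific files go into e.g. "linux-x86_64-debug".
            base = "{}/{}/{}".format(root, name, version)
            if os_name is None:
                return base + "/share"
            parts = [p for p in (os_name, arch, variant) if p]
            return base + "/" + "-".join(parts)

        # install_path("/opt", "myapp", "1.2", "linux", "x86_64", "debug")
        # -> "/opt/myapp/1.2/linux-x86_64-debug"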

    Read the article

  • What are the pros and cons of a non-fixed-interval update loop?

    - by akonsu
    I am studying various approaches to implementing a game loop and I have found this article. In the article the author implements a loop which, if the processing falls behind in time, skips frame renderings and just updates the game in a loop (the last variant called "Constant Game Speed independent of Variable FPS"). I do not understand why it is acceptable to call update_game() in a loop without making sure the update function is called at a particular interval. I do not see any value in doing this. I would think that in my game I want to be sure the game is updated periodically with a known period. So maybe it is worthwhile to have two threads, one would call update periodically, and the other one would redraw the game, also periodically? Would this be a good and practical approach? Of course I would need to synchronise the threads.
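    For what it's worth, the loop the question describes is essentially a fixed-timestep accumulator: update_game() is still called with a known, fixed period, and the inner loop simply catches up on several fixed steps when rendering falls behind, so a second thread is not required. A minimal sketch (running, update_game and render_game are placeholders):

        import time

        UPDATE_DT = 1.0 / 25.0      # fixed simulation step of 40 ms
        MAX_CATCHUP = 5             # cap so a slow machine cannot spiral

        previous = time.monotonic()
        accumulator = 0.0
        while running:
            now = time.monotonic()
            accumulator += now - previous
            previous = now

            steps = 0
            while accumulator >= UPDATE_DT and steps < MAX_CATCHUP:
                update_game(UPDATE_DT)   # always a known, fixed period
                accumulator -= UPDATE_DT
                steps += 1

            render_game()               # rendering runs as often as it can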

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to optimize.

    A first approach is to look in the DataDir of Analysis Services for the folder containing the database. Then look for the biggest files in all subfolders, and you will find a file whose name contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is to use a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC

    You can look at the first rows to understand which are the most expensive columns in your tabular model. The interesting data provided are:

    - TABLE_ID: the name of the object - it can also be a dictionary or an index
    - COLUMN_ID: the name of the column the object belongs to - you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used by the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do to optimize your tabular model. Your options are:

    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating-point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with few distinct values) and going to higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.

    Read the article

  • How do we install Unity-2D and dependencies offline?

    - by Takkat
    We have installed 11.04 32-bit on an old machine that has no internet connection and a graphics card that is not suitable for running Compiz or Unity. Still, we would like to run Unity-2D on this machine. We are aware of answers to this question. Sadly, Keryx will not run on 11.04 32-bit because of unmet dependencies. Building an offline repository is not an option because of limited storage capacity. Is there any other convenient way to find, download, install, and eventually update Unity-2D and all its dependencies (preferably from an OS-independent download path)?

    Read the article

  • Perm SSIS Developer Urgently Required

    - by blakmk
    Job Role: To provide dedicated data services support to the company by designing, creating, maintaining and enhancing database objects, ensuring data quality, consistency and integrity. Migrating data from various sources to a central SQL 2008 data warehouse will be the primary function:

    - Migration of data from bespoke legacy databases to the SQL 2008 data warehouse.
    - Understand key business requirements, liaising with various parts of the company.
    - Create advanced transformations of data, with focus on data cleansing, redundant data and duplication.
    - Create complex business rules regarding data services, migration, integrity and support (best practices).

    Experience:

    - Minimum 3 years' SSIS experience, in a project or BI development role, and involvement in at least 3 full ETL project life cycles, using the following methodologies and tools:
      - Excellent knowledge of ETL concepts including data migration and integrity, focusing on SSIS.
      - Extensive experience with SQL 2005 products; SQL 2008 desirable.
      - Working knowledge of SSRS and its integration with other BI products.
      - Extensive knowledge of T-SQL, stored procedures, triggers (table/database), views and functions, in particular coding and querying.
      - Data cleansing and harmonisation.
      - Understanding and knowledge of indexes, statistics and table structure.
      - SQL Agent: scheduling jobs, optimisation, multiple jobs, DTS.
      - Troubleshoot, diagnose and tune database and physical server performance.
      - Knowledge and understanding of locking, blocks, table and index design and SQL configuration.
    - Demonstrable ability to understand and analyse business processes.
    - Experience in creating business rules on best practices for data services.
    - Experience in working with, supporting and troubleshooting MS SQL servers running enterprise applications.
    - Proven ability to work well within a team and liaise with other technical support staff such as networking administrators, system administrators and support engineers.
    - Ability to create formal documentation, work procedures, and service level agreements.
    - Ability to communicate technical issues at all levels, including to a non-technical audience.
    - Good working knowledge of MS Word, Excel, PowerPoint, Visio and Project.

    Location: Based in Crawley with possibility of some remote working.

    Contact me for more info: http://sqlblogcasts.com/blogs/blakmk/contact.aspx

    Read the article

  • Box 2D Collision Question

    - by Farooq Arshed
    I am very new to the Box2D physics world. I wanted to know how to collide 2 bodies when one is dynamic and the other is kinematic. The whole scenario is explained below: I have 3 balls in total. I want two balls to remain in their places and the third ball to be able to move. When the third ball hits the other two balls, they should move according to the speed and direction from which they were hit. The gravity of my world is 0 because I only want z-axis gravity. I would also like someone to point me towards some good tutorials on Box2D basics that are language-independent. I hope I have explained my scenario well. Thanks for the help in advance.
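    For what it's worth, a kinematic body in Box2D moves only under its own velocity and is not pushed back by contacts, so balls that should react when hit are usually all made dynamic. A minimal sketch using the pybox2d bindings (choosing Python bindings is an assumption; the question itself is language-independent):

        from Box2D import b2World

        world = b2World(gravity=(0, 0))       # no in-plane gravity, as in the question

        def make_ball(x, y):
            body = world.CreateDynamicBody(position=(x, y))
            body.CreateCircleFixture(radius=0.5, density=1.0, restitution=0.9)
            body.linearDamping = 0.5          # balls slow down after being hit
            return body

        ball_a = make_ball(0, 0)              # the two "resting" balls
        ball_b = make_ball(2, 0)
        cue = make_ball(-3, 0)                # the moving ball
        cue.linearVelocity = (10, 0)

        for _ in range(60):                   # step the world at 60 Hz
            world.Step(1.0 / 60, 8, 3)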

    Read the article

  • Shouldn't we count characters of code and comments instead of lines of code and comments? [closed]

    - by Gabriel
    Counting lines of code and comments is sometimes bogus, since most of what we write may be formatted as one or more lines, depending on column count limitations, screen size, style and so forth. Since the commonly used languages (say C, C++, C# and Java) are free-form, wouldn't it be more clever to count characters instead? Edit: I'm not considering LOC-oriented programming where coders try to artificially match requirements by adding irrelevant comments or using multiple lines where fewer would be enough (or the opposite). I'm interested in better metrics that would be independent of coding style, to be used by honest programmers.
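    For what it's worth, such a metric is trivial to compute: counting only non-whitespace characters already makes the number independent of line breaks and indentation. A toy sketch (comment separation deliberately left out):

        def significant_chars(source):
            # Count non-whitespace characters so layout no longer matters.
            return sum(1 for ch in source if not ch.isspace())

        a = "int x=1;"
        b = "int x = 1 ;\n"        # the same statement, formatted differently
        assert significant_chars(a) == significant_chars(b)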

    Read the article

  • Typewriter Sounds

    - by Mr. Typing Sounds
    Since I switched to Ubuntu 12.04 I've missed only one thing: a program that plays typewriter sounds while typing. For instance, in Windows I used this: http://www.colorpilot.com/soundpilot.html for a long time. I have learned that this writing program: http://gottcode.org/focuswriter/ has the sounds, but only within the program itself. However, sometimes I'm writing an email, writing on the web, or doing more complex writing tasks in LibreOffice - all places where these long-missed typing sounds don't apply. Do any of you know of any plans in the community for this sound bit - typing sounds - as an independent program or applet to be fetched from the Ubuntu Software Center soon? The Rhythm Of Creative Writing would really be helped then! ;-)

    Read the article

  • RAID controller dropping the wrong drive

    - by bramp
    I've been having an issue with a 3ware 9500S-8 RAID 10, and I have contacted their tech support, but I wanted to hear the Server Fault community's recommendations. Firstly, all my data is backed up and secure, so I don't mind blowing my RAID away if I have to. But let me describe the problem I've been seeing.

    A month ago, disk 6 dropped out of the RAID. It is mirrored with disk 7, so I wasn't that bothered. I went to the data centre and replaced it. When I got back to the office, I noticed that disk 6 was still not in the RAID, and in fact the controller was still showing the name of the old drive. A week later I went back and replaced the drive again, thinking I might have swapped in a bad drive. Still the same problem. I decided to reboot the machine, to see if that would "force" the controller into seeing the new drive. It did, and a rebuild started to happen (from disk 7). Eventually both drives were showing as good.

    A week later, MySQL flagged the database as corrupt and was unable to repair it. I don't know what went wrong, but I suspected this 6-7 pair. At this point I noticed that the RAID had constantly been verifying itself, over and over. Regardless of this I began to rebuild the database, which took about 19 hours. It's a big database. Near the end of the repair, the RAID controller told me it had dropped disk 7, and that some data was most likely corrupted.

    I contacted LSI tech support, and they very promptly started to help me. I mentioned that drive 7 had been dropped. They suspect that drive 7 was always at fault, and drive 6 had always been good. I want to know how often a RAID controller drops the wrong drive (in this case dropping drive 6 a month ago, instead of 7). I foolishly didn't run smartctl on the drives before I started swapping them out. I just assumed the RAID controller knew what it was talking about.

    I think my plan of action is to replace drive 7, rebuild the array from scratch, double-check smartctl on ALL the disks, and then start restoring my data again. I would appreciate anyone's input on what the correct procedure for swapping drives is, and how often failures like this happen. If anyone would like more information I'd be happy to provide it. Thanks in advance.

    Oh, some more information: I'm running CentOS 5.3, with two RAID arrays, a simple RAID 1 for the OS, and RAID 10 for the database. Both arrays are on different controllers. The RAID 10 is made of 10 identical ST3640323AS drives, until I swapped in a SAMSUNG HD103SJ last month.

    Read the article

  • What is a good design pattern / lib for iOS 5 to synchronize with a web service?

    - by Junto
    We are developing an iOS application that needs to synchronize with a remote server using web services. The existing web services have an "operations" style rather than REST (implemented in WCF but exposing JSON HTTP endpoints). We are unsure of how to structure the web services to best fit with iOS and would love some advice. We are also interested in how to manage the synchronization process within iOS.

    Without going into detailed specifics, the application allows the user to estimate repair costs at a remote site. These costs are broken down by room and item. If the user has an internet connection this data can be sent back to the server. Multiple photographs can be taken of each item, but they will be held in a separate queue, which sends when the connection is optimal (ideally wifi). Our backend application controls the unique ids for each room and item. Thus, each time we send these costs to the server, the server echoes the central database ids back, so that they can be synchronized in the mobile app. I have simplified this a little, since the operations contract is actually much larger, but I just want to illustrate the basic requirements without complicating matters.

    Firstly, the web service architecture: we currently have two operations, GetCosts and UpdateCosts. My assumption is that if we used a strict REST architecture we would need to break our single web service operations into multiple smaller services. This would make the services much more chatty, and we would also have to guarantee a delivery order from the app. For example, we need to make sure that containing rooms are added before the item. Although this seems much more RESTful, our perception is that these extra calls are expensive connections (security checks, database calls, etc). Does the type of web API (operation rather than resource focus) determine chunky vs chatty? Since this is mobile (3G), are we better off handling lots of smaller messages, or a few large ones?

    Secondly, the iOS side: what is the current advice on how to manage data synchronization within the iOS (5) app itself? We need multiple queues and we need to guarantee delivery order in each queue (and technically, ordering between queues). The server needs to control unique ids and other properties and echo them back to the application. The application then needs to update an internal database and, when re-updating, make sure the correct ids are available in the update message (essentially multiple inserts and updates in one call). Our backend has a ton of business logic operating on these cost estimates. We don't want any of this in the app itself.

    Currently the iOS app sends the cost data, and then the server echoes that data back with populated ids (and other data). The existing cost data is deleted and the echoed response data is added to the client database on the device. This is causing us problems, because any photos might not have been sent yet, but the original entity tree has been removed and replaced. Obviously updating the costs tree rather than replacing it would remove this problem, but I'm not sure if there are any nice Xcode libraries out there to do such things. I welcome any advice you might have.
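    On the last point, one approach is to merge the echoed response into the local tree instead of replacing it, so queued photo uploads keep pointing at live rows. A rough Python sketch of the idea (client_id and server_id are hypothetical field names for the device-generated and central ids):

        def merge_echo(local_items, echoed_items):
            # Adopt the server-assigned ids in place instead of
            # deleting and re-inserting the whole entity tree.
            by_client_id = {item["client_id"]: item for item in local_items}
            for echoed in echoed_items:
                local = by_client_id.get(echoed["client_id"])
                if local is not None:
                    local["server_id"] = echoed["server_id"]   # central id
                    local.update(echoed.get("fields", {}))     # server edits
            return local_items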

    Read the article

  • Would this be a good web application architecture?

    - by Gustav Bertram
    My problem: Our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits and run dynamic bits. In addition, we need to consider data caching that reacts to database changes.

    My idea: The concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic and can pull their data directly from models or other sources, but most of the fragments are static scaffolding. If a subtree of fragments is completely static, then I imagine that it could unfold into pure XML that would then be cached as the text representation of its parent element. This process would ideally continue until we are left with a root element that contains all of the static XML and has a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed.

    In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time which propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed. A newspaper section or front page does not need to be updated each second; minutes or sometimes even longer is sufficient. Other fragments would be dynamic and uncached. Typically too many articles are viewed for them to be cached - the cache would overflow. Some individual articles may be cached if they are extremely popular.

    Functional notes: The folding mechanism would need to be smart enough to judge when it would be more profitable to fold a dynamic cached fragment and propagate the expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page. If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content id, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of parent fragments, then they could trigger a refolding process that would then include the updated data. A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast. The identifier of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers, in order to then either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.

    The question: My question has two parts: Is this a good design? Are there any obvious flaws I'm missing? Has somebody else thought of this before? References? Is there an existing alternative that I should consider? A cool templating engine, maybe?
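    To make the folding idea concrete, here is a toy sketch of such a fragment tree (a Python model of the concept, not a real templating engine; a production version would also propagate the minimum expiry upward when folding):

        import time

        class Fragment:
            # ttl=None: static, cache forever; ttl=0: never cache;
            # ttl>0: dynamic and cached for that many seconds.
            def __init__(self, render_fn, children=(), ttl=None):
                self.render_fn = render_fn      # produces this node's XHTML
                self.children = list(children)
                self.ttl = ttl
                self._cached = None
                self._expires = 0.0

            def render(self):
                now = time.time()
                if self._cached is not None and (self.ttl is None or now < self._expires):
                    return self._cached         # the folded subtree
                xml = self.render_fn([c.render() for c in self.children])
                if self.ttl != 0:
                    self._cached = xml
                    self._expires = now + (self.ttl or 0.0)
                return xml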

    Read the article

  • Trying to make a universe [on hold]

    - by caters
    I want to program a universe so that it starts with a big bang, atoms form, then molecules, then stars start to form, then planets, and then moons around those planets. I have a few questions. If 400 IPMUs (In-Program Mass Units) = 1 solar mass, then how would I calculate the number of IPMUs for a spectral class of star, given the range of solar masses for a main-sequence star in that spectral class? How can I have planets not look like stars? Since whether a star is a subdwarf, main-sequence star, subgiant, giant, bright giant, supergiant, or hypergiant depends mainly on the radius and luminosity, how can I have the radius and luminosity independent of the mass?
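    The first question is a straight unit conversion: if 400 IPMUs equal 1 solar mass, multiply a spectral class's solar-mass range by 400. For example:

        IPMU_PER_SOLAR_MASS = 400

        def ipmu_range(min_solar_masses, max_solar_masses):
            # Convert a spectral class's solar-mass range into IPMUs.
            return (min_solar_masses * IPMU_PER_SOLAR_MASS,
                    max_solar_masses * IPMU_PER_SOLAR_MASS)

        # A main-sequence G-class star is roughly 0.8 to 1.04 solar masses:
        print(ipmu_range(0.8, 1.04))   # -> (320.0, 416.0)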

    Read the article

  • BeautyBay.com Boosts its Web business with Endeca!

    - by Richard Lefebvre
    BeautyBay.com Boosts Webpage Views by 70%, Increases Items Placed in Shopping Baskets, and Runs 160 Concurrent Brand and Product Promotions. BeautyBay.com Ltd is the United Kingdom’s largest independent online luxury beauty-product retailer. The company sells more than 10,000 products from leading brands like Urban Decay, Paul & Joe, Mario Badescu, bareMinerals, and Dr Sebagh. It strives to stock consumers’ favorite brands and serve as a leading source of beauty information and product reviews. The company won an Online Retail Award in 2013 in the Beauty, Perfume & Cosmetics category. Read the success story, featuring the role of Oracle Endeca, here.

    Read the article

  • Is it possible to have .bashrc outside home directory?

    - by FSchmidt
    I want to put a .bashrc file in the directory where my application is located, to set up path variables correctly, independent of where that directory resides at the moment. At the same time, I want to be able to run the application right away, without having to source a shell file to set the path every time. Therefore I figured I could use .bashrc, which is executed when a non-login shell is started. If I put this in the proper .bashrc in the home directory, I would have to give an absolute path, which I want to avoid. Is there a way to have something like .bashrc but outside the home directory (i.e. a script that is executed when a terminal is started)?

    Read the article

  • Processing a list of atomic operations, allowing for interruptions

    - by JDB
    I'm looking for a design pattern that addresses the following situation:

    - There exists a list of tasks that must be processed.
    - Tasks may be added at any time.
    - Each task is wholly independent from all other tasks.
    - The order in which tasks are processed has no effect on the overall system or on the tasks themselves.
    - Every task must be processed once and only once.
    - The "main" process which launches the task processors may start and stop without warning. When stopped, the "main" process loses all in-memory data.

    Obviously this is going to involve some state, but are there any design patterns which discuss where and how to maintain that state? Are there any relevant anti-patterns? Named patterns are especially helpful so that we can discuss this topic with other organizations without having to describe the entire problem domain.
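    This is usually discussed as a persistent work queue (or job queue): keeping the pending/done state outside the main process's memory is what survives the restarts. A minimal sketch backed by SQLite (the table and column names are made up):

        import sqlite3

        db = sqlite3.connect("tasks.db")   # state survives process restarts
        db.execute("""CREATE TABLE IF NOT EXISTS tasks (
                          id INTEGER PRIMARY KEY,
                          payload TEXT NOT NULL,
                          done INTEGER NOT NULL DEFAULT 0)""")

        def add_task(payload):
            with db:
                db.execute("INSERT INTO tasks (payload) VALUES (?)", (payload,))

        def process_pending(handler):
            rows = db.execute(
                "SELECT id, payload FROM tasks WHERE done = 0").fetchall()
            for task_id, payload in rows:
                handler(payload)
                with db:   # mark done only after the handler succeeded
                    db.execute("UPDATE tasks SET done = 1 WHERE id = ?",
                               (task_id,))

    Note that a crash between handler(payload) and the UPDATE re-runs that task on restart, so strict once-only processing additionally requires the handler to be idempotent or transactional.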

    Read the article

  • Is there a recommended order to take the Oracle Java EE certification exams?

    - by Karl
    I recently passed the Oracle Certified Professional, Java SE 6 Programmer examination. Now, my boss would like me to take "the next step" to broaden my competence. I tried to explain that there is no equivalent Java EE 6 Programmer examination, but a number of different exams, such as Web Services, Web Components, and Enterprise JavaBeans. Is there a recommended path to follow for the various Oracle certifications in the Enterprise Edition of Java? Is it logical to take some exams prior to taking others because the content builds upon previous knowledge or are they all independent?

    Read the article

  • How can I make my game more popular without paying money?

    - by Marlon Drescher
    I am a game designer, software developer, composer and graphic artist, and I made the 3D hack 'n slash MMO Forgotten Elements on my own. It's playable as an open beta and will be released at the end of the year. I used plain old Java, the JPCT 3D engine, the Tomcat web server, and Blender 3D / GIMP to manage all the tasks. I developed the whole game from scratch. For me the hardest task in this challenge is probably the whole business of marketing and advertisement. Because it is an independent project and I am the only person working on it, there is no money I could invest in advertising. But anyhow... how could I make this game more popular? What would you suggest?

    Read the article

  • How to create reproducible probability in map generation?

    - by nickbadal
    So for my game, I'm using Perlin noise to generate regions of my map (water/land, forest/grass), but I'd also like some probability-based generation too. For instance:

        if(nextInt(10) > 2 && tile.adjacentTo(Type.WATER)) tile.setType(Type.SAND);

    This works fine, and is even reproducible (based on a common seed) if the nextInt() calls are always made in the same order. The issue is that in my game, the world is generated on demand, based on the player's location. This means that if I explore the map differently, and the chunks of the map are generated in a different order, the randomness is no longer consistent. How can I get this sort of randomness to be consistent, independent of call order? Thanks in advance :)
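    The usual fix is to derive a fresh RNG from the world seed plus the tile coordinates, so the result depends only on position, never on generation order. A Python sketch (reusing tile and Type from the question's pseudocode):

        import hashlib
        import random

        def tile_rng(world_seed, x, y):
            # Same seed and coordinates always yield the same stream,
            # no matter when or in which order the chunk is generated.
            key = "{}:{}:{}".format(world_seed, x, y).encode()
            digest = hashlib.sha256(key).digest()
            return random.Random(int.from_bytes(digest[:8], "big"))

        rng = tile_rng(12345, x=10, y=-4)
        if rng.randint(0, 9) > 2 and tile.adjacentTo(Type.WATER):
            tile.setType(Type.SAND)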

    Read the article

  • JAX Innovation Awards 2011

    - by Tori Wieldt
    The JAX Innovation Awards were presented tonight at the JAX Conference in San Jose, California, to reward those technologies, companies, organizations and individuals that make outstanding contributions to Java. The winners were:

    • Most Innovative Java Technology: JRebel
    • Most Innovative Java Company: Red Hat
    • Top Java Community Ambassador: Martin Odersky
    • Special Jury Award: Brian Goetz

    In addition to being acknowledged best-in-class by peers from the Java community, winners received $2500 each. The JAXConf team took nominations from the community and had them reviewed by a panel of independent experts to create a shortlist, which was then voted on by the Java community. "The Java culture inspires innovation," said Sebastian Meyen, JAX Conference Chair, "and we are happy to reward that."

    Read the article

  • How get and set accessors work

    - by Chris Halcrow
    The standard method of implementing get and set accessors in C# and VB.NET is to use a public property to set and retrieve the value of a corresponding private variable. Am I right in saying that this has no effect across different instances of an object? By this I mean, if there are different instantiations of an object, then those instances and their properties are completely independent, right? So is my understanding correct that the private variable is just a construct used to implement the get and set pattern? I've never been 100% sure about this.
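    Yes - each instance carries its own backing field, so the accessors of one instance never touch another's state. A quick illustration of the same pattern in Python rather than C# (Account is a made-up example class):

        class Account:
            def __init__(self):
                self._balance = 0        # the private backing field

            @property
            def balance(self):           # the "get" accessor
                return self._balance

            @balance.setter
            def balance(self, value):    # the "set" accessor
                self._balance = value

        a, b = Account(), Account()
        a.balance = 100
        print(a.balance, b.balance)      # -> 100 0 (fully independent)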

    Read the article

  • How should I handle a redirect to an identity provider during a web api data request

    - by Erds
    Scenario: I have a single-page web app consisting purely of HTML, CSS, and JavaScript. After initial load and during use, it updates various views with data from one or more RESTful APIs via Ajax calls. The API calls return data in JSON format. Each web API may be hosted on an independent domain.

    Question: During the Ajax callout, if my authorization token is not deemed valid by the web API, the web API will redirect me (302) to the identity provider for that particular API. Since this is an Ajax callout for data and not necessarily for display, I need to find a way to display the identity provider's authentication page. It seems that I should trap that redirect and open up another view to display the identity provider's login page. Once the OAuth series of redirects is complete, I need to grab the token and retrigger my Ajax data call with the token attached. Is this a valid approach, and if so are there any examples showing the Ajax handling of the redirects?

    Read the article

  • How do I create a subdomain for a site hosted by someone who does not allow it?

    - by user99572_is_fine
    I want to create a subdomain for a site hosted by Jimdo (a DIY website builder). Jimdo does not allow subdomains however. I am trying to find a workaround where a subdomain is hosted elsewhere but everything else remains as it is. E.g. I use their email service and I want to keep it. The domain is not hosted by Jimdo, but by a host that allows me to edit my zones. It points to the Jimdo NS. I have independent hosting where I have NS information. This is where I want to host my subdomain. My thinking was that I could use ZoneEdit as a "fork" that allows me to keep using my Jimdo page like before and, at the same time, directs a subdomain to another host. Provided this is possible: Question: How do I configure ZoneEdit CNAME or NS records to forward visitors to my website and my email to my Jimdo mail account while pointing a subdomain to another host?

    Read the article
