Search Results

Search found 22526 results on 902 pages for 'multiple databases'.

Page 365 of 902

  • Using WebSphere CloudBurst with PowerVM to AIX virtualization over a cloud

    - by ADD Geek
    Hi there. We are studying virtualization options to reduce our datacenter cost, and this research was assigned to me. We looked into alternatives and almost reached the conclusion that PowerVM is the only option for virtualizing pSeries servers. We found no explicit mention of cloud support in any document, although CloudBurst was mentioned. From the videos we watched and the documents we read, CloudBurst seems to be oriented towards application servers (WebSphere software), but our environment does not rely only on WebSphere; we also run banking applications, Oracle databases and MQ/Broker. The questions are:
    1. Can we virtualize the existing applications (all running AIX) on a cloud running on top of some of the existing servers, given that we do the sizing properly?
    2. Is PowerVM meant to run on top of CloudBurst?
    3. If the above is applicable, is this a form of HA solution (since a VM can run on top of multiple physical boxes, while the same physical box runs multiple live images)?
    Thanks for your help.

    Read the article

  • Tiny linux box with 2xGbLAN, WLAN and 10MB/s AES throughput?

    - by Nakedible
    I'd like to find a small Linux box with the following specifications:
    - Small (mini-ITX size is OK)
    - Fanless
    - Runs Debian
    - At least two gigabit network interfaces
    - WLAN that supports "host AP" with hostapd + mac80211 in AP mode
    - Can encrypt AES at least 10 megabytes per second
    - Total cost $300 or less
    Solutions from multiple parts are also accepted - I can buy an external network card etc. and build the box myself if the components are available. If you don't know about the "host AP" thing, just suggest your solution and I'll find out if I can get that resolved. If I can't get all that, I can possibly skip the "runs Debian" part, and I can definitely skip the hostapd part if the box can act as a wireless access point with multiple ESSIDs out of the box. Something like the Asus RT-N16 is close - it doesn't run Debian easily, and probably doesn't encrypt AES fast enough. Something like the Zotac ZBOX HD-ID11 is also close - no idea which WLAN card it has, and it lacks a second gigabit interface, but it is otherwise nice.
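    Not from the original question, but two quick checks may help evaluate candidate boxes (a sketch; wlan0 and the SSID are placeholder values, and it assumes the Debian hostapd and openssl packages):

        # Minimal /etc/hostapd/hostapd.conf to confirm AP mode works with the mac80211/nl80211 driver
        interface=wlan0
        driver=nl80211
        ssid=test-ap
        hw_mode=g
        channel=6

        # Rough AES throughput estimate on the candidate CPU
        openssl speed aes-128-cbc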

    Read the article

  • HP LaserJet Pro 400 Color M451dn Phantom Print Jobs

    - by francisswest
    Scenario: Multiple printers are hooked up to a print server (2008 R2), including this HP LaserJet Pro 400 Color M451dn. All machines using the printer run Windows 7 Enterprise x64.
    Problem: Every couple of days the users who frequent this printer let me know that a few dozen pages, with random characters down one side of the paper, have printed out. This usually happens during the evening, when no one is around to send print jobs to it.
    What I have done: Provided the below screen shot of the printer log with what I assume are the print jobs in question. I have looked into printer driver compatibility and found no issues.
    Question: Is there a known issue with this printer or similar printers, and is there a solution that people are familiar with when they see multiple pages of gibberish printing out?

    Read the article

  • Which hosts have low latency across United States and Europe?

    - by Joost van Doorn
    I'm looking for some information on web hosts that have low latency (<100ms) to both the United States and Europe. The host can be in either the United States or Europe. Latency matters most to the United States, the United Kingdom, the Netherlands, Sweden and Norway. The host should be able to provide managed hosting. Hosting at multiple locations is not what I'm looking for. Answers should contain at least some latency information from multiple locations, preferably from Los Angeles, New York, London, Amsterdam and Oslo. Some information on your experience with this host is also preferred; do not rant, and do provide details of your package (with or without SLA, dedicated or VPS, etc.). From my own little research I found that New York-based hosts can probably offer low latency to all these locations, but I do not have many statistics to back that up, other than that my own ping from the Netherlands to New York is about 85 ms.

    Read the article

  • How to indicate reliability when reporting availability of competencies

    - by Jan Doggen
    We have employees with competencies:
    - Pete: Welder, Carpenter
    - Melissa: Carpenter
    Assume they both work 40 hours/week and have not yet been assigned work. We need to report the availability of these competencies, expressed in hours. As far as I can see, we can report this in two ways:
    Method A. When someone has multiple competencies, count the full hours for each: Welder 40 hours, Carpenter 80 hours.
    Method B. When someone has multiple competencies, divide the hours equally over the competencies: Welder 20 hours, Carpenter 60 hours.
    Method A has our preference:
    - A good planner will know to plan the least available competency first. If 30 hours of welding is planned, we are left with 10 hours of welder and 50 hours of carpenter.
    - Method B has the disadvantage that the planner thinks he cannot plan the job when 30 hours of welding is required.
    However, if we report this way we would like to give an estimate of the reliability of the numbers for each competency, i.e. by how much are they over-reported? In my example A, would I say that carpenter is 100% over-reported, or 50%, or maybe another number? How would I calculate this for large numbers of competencies? I'm sure we are not the first ones dealing with this - is there a usual way of doing this in planning?
    Additionally:
    - Would there be an even better method than A or B?
    - Optionally, we also have a preference order of competencies (like: use him/her in this order); Pete could be 1. welder, 2. carpenter. Does this introduce new options?
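    For illustration, a minimal sketch of the two counting methods (names and numbers taken from the example above; the helper function is hypothetical, not part of any planning tool):

        from collections import defaultdict

        def availability(employees, split_hours=False):
            """employees: dict of name -> (weekly_hours, [competencies]).
            split_hours=False gives Method A (full hours counted per competency),
            split_hours=True gives Method B (hours divided over competencies)."""
            totals = defaultdict(float)
            for hours, skills in employees.values():
                share = hours / len(skills) if split_hours else hours
                for skill in skills:
                    totals[skill] += share
            return dict(totals)

        staff = {"Pete": (40, ["Welder", "Carpenter"]), "Melissa": (40, ["Carpenter"])}
        print(availability(staff))                    # Method A: {'Welder': 40.0, 'Carpenter': 80.0}
        print(availability(staff, split_hours=True))  # Method B: {'Welder': 20.0, 'Carpenter': 60.0}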

    Read the article

  • Working with Reporting Services Filters – Part 2: The LIKE Operator

    - by smisner
    In the first post of this series, I introduced the use of filters within the report rather than in the query. I included a list of filter operators, and then focused on the use of the IN operator. As I mentioned in the previous post, the use of some of these operators is not obvious, so in this series of blog posts I'm going to spend some time explaining them, as well as describing ways that you can use report filters in Reporting Services.
    Now let's look at the LIKE operator. If you write T-SQL queries, you've undoubtedly used the LIKE operator to produce a query using the % symbol as a wildcard for multiple characters, like this:
        select * from DimProduct where EnglishProductName like '%Silver%'
    And you know that you can use the _ symbol as a wildcard for a single character, like this:
        select * from DimProduct where EnglishProductName like '_L Mountain Frame - Black, 4_'
    So when you encounter the LIKE operator in a Reporting Services filter, you probably expect it to work the same way. But it doesn't. You use the * symbol as a wildcard for multiple characters, as shown here:
        Expression:  [EnglishProductName]
        Data Type:   Text
        Operator:    Like
        Value:       *Silver*
    Note that you don't have to include quotes around the string that you use for comparison. Books Online has an example of using the % symbol as a wildcard for a single character, but I have not been able to successfully use this wildcard. If anyone has a working example, I'd love to see it!

    Read the article

  • OpenLDAP server logs filled with "TLS negotiation failure"

    - by WildVelociraptor
    I recently migrated an old OpenLDAP setup to a newer server with a more robust certificate setup. Currently, most hosts are required to verify that the cert matches the host:
        tls_checkpeer yes
        TLS_REQCERT always
    In the server logs, there are multiple occurrences of:
        Nov 6 10:45:08 <servername> slapd[1773]: conn=2785646 fd=35 closed (TLS negotiation failure)
    These errors appear from multiple hosts, but there don't seem to be any issues actually logging into those servers with an LDAP account. Does anyone know what would cause these errors? The server is running Ubuntu 12.04.2 and OpenLDAP version 2.4.28. The cert was generated using GnuTLS.
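    Not part of the original question, but two standard client-side checks can help narrow down a negotiation failure like this (hostnames and the CA file path are placeholders):

        # Force StartTLS and show handshake details (fails loudly if TLS cannot be negotiated)
        ldapsearch -H ldap://ldap.example.com -x -ZZ -b "" -s base -d 1

        # Inspect the certificate chain the server actually presents on ldaps
        openssl s_client -connect ldap.example.com:636 -CAfile /etc/ssl/certs/ca-certificates.crt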

    Read the article

  • reduce memory footprint of java virtual machine

    - by Lorenzo Boccaccia
    I have a Citrix server where multiple users run multiple Java applications. Is there a way to reduce the memory footprint of the JVM itself? The max heap is already set fairly low (64 MB), as is the permgen space (32 MB), and we're at the point where the JVM itself uses far more memory than the application (the committed area is around 350 MB). I'm looking for a way to reduce the JVM's RAM usage, or to make all the applications run within the same JVM, or any other way of sharing common pages between running JVMs (if possible), or to switch to a JVM that has optimizations for this scenario. We are currently using Windows Server 2003 and the Sun Java virtual machine 1.6.
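    Not from the original question, but for reference, a few standard HotSpot 1.6 options trim the per-process overhead that sits outside the heap (the numbers are illustrative only, and app.jar is a placeholder):

        java -client -Xmx64m -XX:MaxPermSize=32m -Xss256k -XX:ReservedCodeCacheSize=16m -XX:+UseSerialGC -jar app.jar

    Smaller thread stacks (-Xss), a smaller JIT code cache and the serial collector all reduce the non-heap footprint; the client VM with the serial collector is also the combination that supports class data sharing of the core JDK classes between JVMs.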

    Read the article

  • Does RDNS for mail server have to match the mail server hostname exactly?

    - by threecheeseopera
    Typically when setting up a mail server, I create an rDNS record for the mail server IP to match the mail server hostname (ex: mail.example.com). Can I instead set the rDNS PTR to match the parent domain (e.g. example.com), if this server is being used for multiple purposes, and still send mail successfully (i.e. not be classified as spam because of mismatched rDNS)? Thanks!
    EDIT: The article at http://en.wikipedia.org/wiki/Forward_Confirmed_reverse_DNS seems to indicate that it might be more complicated than I had thought. For instance, 1) I did not know that you could have multiple PTR records for a given IP; 2) it appears that as long as each PTR record matches an A record, everything is good (basically nullifying my question). Would you agree?
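    Not from the original question, but a quick forward-confirmed reverse DNS check looks like this (the IP and name are placeholders):

        # PTR record for the mail server's address
        dig +short -x 203.0.113.25
        # The name returned above should resolve back to the same address
        dig +short A mail.example.com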

    Read the article

  • SQL SERVER – How to Get SQL Server Restart Notification?

    - by Pinal Dave
    Few days back my friend called me to know if there is any tool which can be used to get restart notification about SQL in their environment. I told him that SQL Server can do it by itself with some configuration. He was happy and surprised to know that he need not spend any extra money. In SQL Server, we can configure stored procedure(s) to run at start-up of SQL Server. This blog gives the steps to achieve it. There are many situations where this feature can be used. Below are a few:
    - Logging SQL Server startup timings
    - Modifying data in some table during startup (i.e. a table in tempdb)
    - Sending notification about SQL start
    Step 1 – Enable 'scan for startup procs'
    This can be done either using T-SQL or the user interface of Management Studio.
        EXEC sys.sp_configure N'Show Advanced Options', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO
        EXEC sys.sp_configure N'scan for startup procs', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO
    To change the setting in the interface instead, go to "Server" > "Properties" and use the "Advanced" tab. "Scan for Startup Procs" is the parameter under the "Miscellaneous" section. Set the value to "True" and hit OK.
    Step 2 – Create the stored procedure
    It's important to note that the procedure is executed after recovery is finished for ALL databases. Here is a sample stored procedure; you can use your own logic in the procedure.
        CREATE PROCEDURE SQLStartupProc
        AS
        BEGIN
            CREATE TABLE ##ThisTableShouldAlwaysExists (AnyColumn INT)
        END
    Step 3 – Set the procedure to run at startup
    We need to use sp_procoption to mark the procedure to run at startup. Here is the code to let SQL know that this is a startup proc:
        sp_procoption 'SQLStartupProc', 'startup', 'true'
    This can be used only for procedures in the master database:
        Msg 15398, Level 11, State 1, Procedure sp_procoption, Line 89
        Only objects in the master database owned by dbo can have the startup setting changed.
    We also need to remember that such a procedure should not have any input/output parameters. Here is the error which would be raised otherwise:
        Msg 15399, Level 11, State 1, Procedure sp_procoption, Line 107
        Could not change startup option because this option is restricted to objects that have no parameters.
    Verification
    Here is the query to find which procedures are marked as startup procedures:
        SELECT name FROM sys.objects WHERE OBJECTPROPERTY(OBJECT_ID, 'ExecIsStartup') = 1
    Once this is done, I restarted the SQL instance and here is what we would see in the SQL ERRORLOG:
        Launched startup procedure 'SQLStartupProc'.
    This confirms that the stored procedure was executed. You can also notice that this is done after all databases are recovered:
        Recovery is complete. This is an informational message only. No user action is required.
    After a few days my friend again called me and asked – I want to turn this OFF? Use the comments section and post the answer for him.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL
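    Not part of the original post, but for reference, the same sp_procoption call with 'false' should clear the startup flag again (a sketch using the procedure name from the example above):

        EXEC sp_procoption 'SQLStartupProc', 'startup', 'false'
        GO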

    Read the article

  • Is djvubundle available in Ubuntu?

    - by Tim
    The official webpage says:
    Assembling DjVu Images into Multipage Documents. The batch compressors distributed as part of the DjVuText and DjVuLayered packages can directly produce a multipage DjVu file when fed with multiple input files. The files produced are smaller than if the pages are compressed separately, because the compressor can extract and share redundant information across multiple pages. Individually compressed DjVu pages can be assembled into multipage documents using the free package DjVuMulti. To assemble a bunch of DjVu images into a single BUNDLED document, simply type:
        djvubundle page1.djvu page2.djvu ... pageN.djvu document.djvu
    To assemble a bunch of DjVu images into an INDIRECT document, type:
        djvujoin page1.djvu page2.djvu ... pageN.djvu documentdir/index.djvu
    where documentdir must be an existing directory where all the individual page files will be copied. To disassemble a BUNDLED document into an INDIRECT one, simply say:
        djvujoin document.djvu documentdir/indexfile.djvu
    To convert a multipage document from one of the old 2.0 multipage formats, do:
        djvureindex olddocument newdocument
    The programs djvujoin and djvubundle supersede the 2.0 programs djvuindex and djvumerge.
    I couldn't find djvujoin and djvubundle for Ubuntu; djvulibre doesn't have them either. Am I missing something? Thanks.
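    Not from the original question: in the djvulibre-bin package that Ubuntu does ship, the bundling role of djvubundle appears to be covered by djvm, roughly like this:

        # create a bundled multipage document from individual pages
        djvm -c document.djvu page1.djvu page2.djvu page3.djvu
        # list the components of the bundled document
        djvm -l document.djvu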

    Read the article

  • Can't play stream from TorrentFlux server

    - by thegreyspot
    I am trying to stream a video from my TorrentFlux-b4rt server. I tried multiple media players; none work. Only VLC was able to produce an error message:
        Input can't be opened: VLC is unable to open the MRL 'mms://..*.*:8080/'. Check the log for details.
    I have tried multiple computers on different networks and all have the same issue. I am using Windows 7 to play the videos, and the server is TorrentFlux-b4rt 1.0-beta2 on Ubuntu 9.10.

    Read the article

  • How to share code as open source?

    - by Ethel Evans
    I have a little program that I wrote for a local group to handle a somewhat complicated problem: scheduling multiple meetings in multiple locations that change weekly according to certain criteria. It's a niche need, but I wouldn't be surprised if there are other groups that could use software like this. In fact, we've had requests from others for directions on starting a group like this, and if their groups get as big, they might also want special software to help with scheduling. I plan to continue developing the program and eventually make it an online web app, but a very simple alpha version is completed as a console app. I'd like to make it available as open source, but I have no idea what kind of process I should go through first. Right now, all I have is Java code, not even unit-tested thoroughly. I haven't shown the code to anyone else. There is no documentation. I don't know where I would put the code so others could access it. I don't know anything about licensing it. I don't know what kind of support people will expect from me if I release it as open source. I have no idea what else I should worry about. Can someone outline for me (or post an article that outlines) the process of taking open source software from "coded" to "completed / available"? I really don't want to embarrass myself by doing things weirdly.

    Read the article

  • RESTFul: state changing actions

    - by Miro Svrtan
    I am planning to build a RESTful API, but there are some architectural questions that are creating problems in my head. Adding backend business logic to clients is an option I would like to avoid, since updating multiple client platforms is hard to maintain in real time when business logic can change rapidly. Let's say we have an article as a resource (api/article); how should we implement actions like publish, unpublish, activate or deactivate, while keeping it as simple as possible?
    1) Should we use api/article/{id}/{action}, since a lot of backend logic can happen there, like pushing to remote locations or changing multiple properties? Probably the hardest thing here is that we would need to send all the article data back to the API for updating, and multi-user work could not be implemented. For instance, an editor could send data that is 5 seconds old and overwrite a fix that some other journalist made just 2 seconds ago, and there is no way I could explain this to clients, since publishing an article is really not in any way connected to updating the content.
    2) Creating a new resource can also be an option, api/article-{action}/{id}, but then the returned resource would not be article-{action} but article, which I'm not sure is proper. Also, in the server-side code the article class handles the actual work for both resources, and I'm not sure if this goes against RESTful thinking.
    Any suggestions are welcome.
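    One common pattern (a sketch, not from the original question; the URL shape and field names are hypothetical) is to expose the state as a small sub-resource and make the request conditional, so a stale client cannot silently overwrite newer work:

        PATCH /api/article/42/status HTTP/1.1
        Content-Type: application/json
        If-Match: "33a64df551425fcc"

        {"status": "published"}

    If the article has changed since the client last read it, the ETag in If-Match no longer matches and the server can reply with 412 Precondition Failed instead of applying the stale update.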

    Read the article

  • 32bit Application Memory Usage on 64bit Windows 7

    - by Brian
    I have an early 2012 MacBook Pro with an Intel i7 processor and 16 GB of RAM, running Windows 7 Professional 64-bit via Boot Camp. I work in Geographical Information Systems as a programmer, so most of the applications I am running are 32-bit applications, but they tend to use a lot of resources (i.e. ArcGIS, SQL Server Express, Visual Studio, etc.). I have been noticing that when I have multiple instances of either the same 32-bit application or different 32-bit applications, and they are all working on hefty processing tasks, I still only top out at about 30% memory use. I understand 32-bit applications are limited to less than 4 GB of RAM, but I assumed that one instance could use its own 4 GB while another instance could use another 4 GB, taking full advantage of all the memory I have installed. Can anyone explain how this works and how I can get my applications to take advantage of all my memory by running multiple instances?
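    Not from the original question, but one detail worth checking: on 64-bit Windows each 32-bit process gets its own virtual address space, yet it is capped at 2 GB unless the executable is marked large-address-aware (which raises the cap to 4 GB). The Visual Studio tools can show whether the flag is set (the executable name is a placeholder):

        dumpbin /headers SomeApp.exe | findstr /i "large"

    So several heavy 32-bit instances can still leave most of 16 GB untouched if each one stays well under its own cap.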

    Read the article

  • 10 gigabit or 1 gigabit switch

    - by Guntis
    We are planning to move MySQL to a dedicated box. At this moment we have web servers and MySQL is running on each. The question is: is it cheaper to buy a 10G switch and put a 10G network card into the MySQL server, or to buy a normal gigabit switch and connect the MySQL box to the switch with multiple network cables? In the 1G scenario we would give each web server a different MySQL IP address. I don't think a MySQL box with one 1G link is enough to satisfy the MySQL traffic from multiple web boxes. At this moment we have 3 servers which are running MySQL/web. The plan is to add a fourth server for MySQL only. Thanks.
    Edit: if we buy a 1G switch with mini-GBIC ports, can we put 10G connectors in the mini-GBIC slots and then connect the MySQL box to that port?
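    For the multiple-cable option, one way to avoid giving each web server a different MySQL IP is Linux interface bonding on the MySQL box. A rough sketch of a Debian/Ubuntu-style /etc/network/interfaces stanza (assumes the ifenslave package and a switch that supports 802.3ad link aggregation; the address is a placeholder):

        auto bond0
        iface bond0 inet static
            address 192.0.2.10
            netmask 255.255.255.0
            bond-slaves eth0 eth1
            bond-mode 802.3ad
            bond-miimon 100

    Note that 802.3ad balances per flow, so any single web server connection is still limited to 1G; it is the aggregate across several web servers that improves.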

    Read the article

  • dual/multi-boot computers and software licensing

    - by Matt
    Suppose you have a computer with two or more operating systems, and a certain piece of software whose license terms allow it to be installed on one computer, and which does a daily check with a remote server to verify that your serial is only used on the original install computer. You install this software on each of your OSes, but since each is a different OS, the remote server would determine that it is not on the same computer, and so would disable your license. So my question: when a license refers to a single computer, does a situation like this usually count as a single computer, or do the multiple OSes effectively make it multiple computers? How do you think a software vendor (specifically thinking of AV companies that do this sort of serial check) would handle this situation?

    Read the article

  • Azure Web Sites FTP credentials

    - by Bertrand Le Roy
    A quick tip for all you new enthusiastic users of the amazing new Azure. I struggled for a few minutes finding this, so I thought I'd share. The Azure dashboard doesn't seem to give easy access to your FTP credentials, and they are not the login and password you use everywhere else. What Azure does give you though is a Publish Profile that you can download. This is a plain XML file that should look something like this:
        <publishData>
          <publishProfile profileName="nameofyoursite - Web Deploy"
                          publishMethod="MSDeploy"
                          publishUrl="waws-prod-blu-001.publish.azurewebsites.windows.net:443"
                          msdeploySite="nameofyoursite"
                          userName="$NameOfYourSite"
                          userPWD="sOmeCrYPTicL00kIngStr1nG"
                          destinationAppUrl="http://nameofyoursite.azurewebsites.net"
                          SQLServerDBConnectionString=""
                          mySQLDBConnectionString=""
                          hostingProviderForumLink=""
                          controlPanelLink="http://windows.azure.com">
            <databases/>
          </publishProfile>
          <publishProfile profileName="nameofyoursite - FTP"
                          publishMethod="FTP"
                          publishUrl="ftp://waws-prod-blu-001.ftp.azurewebsites.windows.net/site/wwwroot"
                          ftpPassiveMode="True"
                          userName="nameofyoursite\$nameofyoursite"
                          userPWD="sOmeCrYPTicL00kIngStr1nG"
                          destinationAppUrl="http://nameofyoursite.azurewebsites.net"
                          SQLServerDBConnectionString=""
                          mySQLDBConnectionString=""
                          hostingProviderForumLink=""
                          controlPanelLink="http://windows.azure.com">
            <databases/>
          </publishProfile>
        </publishData>
    I've highlighted the FTP server name, user name and password. This is what you need to use in FileZilla or whatever you use to access your site remotely. Notice how the password looks encrypted. Well, it's not really encrypted in fact. This is your password in clear text. It's just crypto-random gibberish, which is the best kind of password.
    UPDATE: About 2 minutes after I posted that, David Ebbo mentioned to me on Twitter that if you've configured publishing credentials (for Git typically) those will work too. Don't forget to include the full user name though, which should be of the form nameofthesite\username. The password is the one you defined.
    That's it. Enjoy.

    Read the article

  • Best PerfCounters for monitoring system health of IIS, WCF, WWF and .NET for a Workflow-based solution

    - by Gineer
    We have a solution built in .NET that will be installed into a client environment. The solution will span multiple servers and run on multiple tiers. The client makes use of MOM (Microsoft Operations Manager) to monitor the system. What are the best counters to use for monitoring the overall health of the system? Are there any built-in counters that we could add into a MOM pack (as an alert) to test a given scenario? Any thoughts or suggestions would be much appreciated. Thanks

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. As we have little time we need to be selective about whatever we learn. I believe I know quite a lot of things in SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL with SQL features and transactions.
    As a part of learning NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database.
    The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but with their software release on Oct 31, everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45 day trial for unlimited cluster sizes. With the forever free offer, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted for ClustrixDB.
    ClustrixDB allows users to:
    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible and can use MySQL drivers, tools and replication
    While I was going through the documentation I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution.
    Indeed, ClustrixDB sounds like a very promising solution, so I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them.
    - Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack – Top 2 fastest growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.
    Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the topic is NewSQL, I know what I am talking about.
    Read more about why Clustrix thinks ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that in the future when we discuss the technical aspects of it, we are all on the same page. The software can be downloaded here.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: Clustrix

    Read the article

  • IE does not send NTLM domain

    - by Buddy Casino
    I have a problem with NTLM single sign-on with IE8. We've got multiple domain controllers and users from multiple domains that we try to authenticate to a web application via NTLMv1 pass-through. Somehow IE fails to send the user's domain in the NTLM Type 1 message. This has the effect that the webapp cannot match users properly to their domain controllers, resulting in failed logon attempts, because a user from domain X tries to authenticate against domain controller Y. This problem does not occur with Firefox, as it always sends the correct domain header. So: how do I get IE to send the domain in the NTLM header?

    Read the article

  • Limitations of the SharePoint join using CAML

    - by ybbest
    Limitation One
    In SharePoint 2010, you can join the primary list to a foreign list and include more than one field from the foreign list. However, the limitation is that the included fields from the foreign list have to be one of the following types:
    - Calculated (treated as plain text)
    - ContentTypeId
    - Counter
    - Currency
    - DateTime
    - Guid
    - Integer
    - Note (one-line only)
    - Number
    - Text
    The above limitation also explains why you cannot include some types of fields from the remote list when creating a lookup.
    Limitation Two
    When using a CAML query to join SharePoint lists, there can be joins to multiple lists, multiple joins to the same list, and chains of joins. However, only inner and left outer joins are permitted, and the field in the primary list must be a Lookup type field that looks up to the field in the foreign list.
    Limitation Three
    The support for writing the JOIN query in CAML is very limited. I have to hand-code the CAML query to join the lists - not fun at all. Some blog posts mention using LINQ to SharePoint and getting the CAML code from there, but I never got it to work. You can check this blog post for that approach; let me know if it works for you.
    References:
    http://msdn.microsoft.com/en-us/library/ee535502.aspx
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spquery.joins.aspx
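    For reference, a rough sketch of what the hand-coded join fragments look like (the list and field names are hypothetical; the first fragment goes into SPQuery.Joins and the second into SPQuery.ProjectedFields, for a primary list with a lookup field CustomerLookup pointing at a Customers list):

        <Join Type="LEFT" ListAlias="Customers">
          <Eq>
            <FieldRef Name="CustomerLookup" RefType="Id" />
            <FieldRef List="Customers" Name="ID" />
          </Eq>
        </Join>

        <Field Name="CustomerCity" Type="Lookup" List="Customers" ShowField="City" />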

    Read the article

  • Multi-Resolution Mobile Development

    - by user2186302
    I'm about to start development on my first game for mobile (I already have a Flash prototype completed, so it's just a matter of "porting" it to mobile and fixing up the code), and I plan on getting the game working on iPhones and most Android devices. I am using Haxe along with OpenFL and HaxeFlixel for development. My question is: what resolution should I design the game in initially, and/or what is the best way to develop a game for multiple resolutions? I have found several different methods; the best, in my opinion, is strategy 3 on this page: http://wiki.starling-framework.org/manual/multi-resolution_development. However, I have some questions about it. First, what would the best base resolution be? The guide suggests 240*320, which seems alright to me, although if I choose to use pixel graphics, as I most probably will given I'm using HaxeFlixel, I'm not sure whether they'll look too blocky on larger screens - and I'm not even sure that is a problem, as it might still look alright. (Honestly, I'm not sure about that, and I'd love to see examples of games that use this method and look nice.) Finally, please feel free to share whatever methods you use and think are best. For example, HaxeFlixel has a scaling feature that scales the game to fit the exact screen size, but I'm afraid that would lead to blurry and improperly scaled graphics, since it would scale by non-integers. I'm not sure how noticeable a problem that may or may not be, although from experience I'm pretty sure it won't look nice, and currently I don't think I'm going to go with this option. I would really appreciate any help on this subject. Thank you in advance.
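    As a rough illustration of the integer-scaling idea discussed above (plain arithmetic, not HaxeFlixel-specific; the 240x320 base is the one the question mentions):

        def integer_scale(base_w, base_h, screen_w, screen_h):
            """Largest whole-number zoom of the base resolution that still fits the screen."""
            scale = max(1, min(screen_w // base_w, screen_h // base_h))
            view_w, view_h = base_w * scale, base_h * scale
            # Remaining pixels become letterbox/pillarbox borders (or extra visible play area).
            margin_x, margin_y = screen_w - view_w, screen_h - view_h
            return scale, (view_w, view_h), (margin_x, margin_y)

        print(integer_scale(240, 320, 480, 800))    # -> (2, (480, 640), (0, 160))
        print(integer_scale(240, 320, 1080, 1920))  # -> (4, (960, 1280), (120, 640))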

    Read the article

  • Information Spilling Across Object Boundaries

    - by Winston Ewert
    Many times my business objects tend to have situations where information needs to cross object boundaries too often. When doing OO, we want information to be in one object, and as much as possible all code dealing with that information should be in that object. However, business rules do not follow this principle, which gives me trouble. As an example, suppose that we have an Order which has a number of OrderItems, each of which refers to an InventoryItem which has a price. I invoke Order.GetTotal(), which sums the results of OrderItem.GetPrice(), which multiplies a quantity by InventoryItem.GetPrice(). So far so good. But then we find out that some items are sold with a two-for-one deal. We can handle this by having OrderItem.GetPrice() do something like InventoryItem.GetPrice(quantity) and letting InventoryItem deal with it. However, then we find out that the two-for-one deal only lasts for a particular time period, and this time period needs to be based on the date of the order. Now we change OrderItem.GetPrice() to call InventoryItem.GetPrice(quantity, order.GetDate()). But then we need to support different prices depending on how long the customer has been in the system: InventoryItem.GetPrice(quantity, order.GetDate(), order.GetCustomer()). But then it turns out that the two-for-one deals apply not just to buying multiple of the same inventory item, but to multiples of any item in an InventoryCategory. At this point we throw up our hands and just give the InventoryItem the order item, allowing it to travel over the object reference graph via accessors to get the information it needs: InventoryItem.GetPrice(this).
    TL;DR I want low coupling between objects, but business rules often force me to access information from all over the place in order to make particular decisions. Are there good techniques for dealing with this?
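    One commonly suggested remedy (a sketch, not from the original post, with hypothetical names) is to pull the cross-cutting rule into its own object and hand it an explicit context gathered once by the Order, so neither OrderItem nor InventoryItem has to crawl the object graph:

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class Item:
            category: str
            base_price: float

        @dataclass
        class PricingContext:
            """Inputs the pricing rules need, gathered once by the Order instead of
            being threaded through every GetPrice() signature."""
            order_date: date
            category_quantities: dict  # category name -> total quantity on the order

        class TwoForOneRule:
            def __init__(self, category, start, end):
                self.category, self.start, self.end = category, start, end

            def unit_multiplier(self, item, ctx):
                # Pay for half the units (rounded up) while the deal is running.
                qty = ctx.category_quantities.get(self.category, 0)
                if (item.category == self.category and qty >= 2
                        and self.start <= ctx.order_date <= self.end):
                    return (qty - qty // 2) / qty
                return 1.0

        def line_price(item, quantity, rules, ctx):
            multiplier = min((r.unit_multiplier(item, ctx) for r in rules), default=1.0)
            return item.base_price * quantity * multiplier

        frame = Item("frames", 100.0)
        ctx = PricingContext(date(2024, 6, 1), {"frames": 2})
        rule = TwoForOneRule("frames", date(2024, 5, 1), date(2024, 6, 30))
        print(line_price(frame, 2, [rule], ctx))  # 100.0 - two frames for the price of one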

    Read the article
