Search Results

Search found 33406 results on 1337 pages for 'client library'.

Page 667/1337 | < Previous Page | 663 664 665 666 667 668 669 670 671 672 673 674  | Next Page >

  • Pre-Populate the AppFabric Cache

    When I start talking to people about the caching functionality that is part of Windows Server AppFabric, I am usually asked "What is the AppFabric Cache?" The MSDN page at http://msdn.microsoft.com/en-us/library/ee790954.aspx provides a great overview (below) as well as additional information. The Cache is defined as: "Windows Server AppFabric caching features use a cluster of servers that communicate with each other to form a single, unified application cache system. As a distributed...
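    In practice, pre-populating just means warming the cache with known keys before the first request needs them. Below is a minimal C# sketch using the AppFabric caching client API (Microsoft.ApplicationServer.Caching); the cache name "ReferenceData" and the LoadProduct helper are hypothetical stand-ins, not something from the article.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class CacheWarmer
{
    static void Main()
    {
        // DataCacheFactory reads the dataCacheClient section of app.config
        // to locate the cache cluster.
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetCache("ReferenceData"); // hypothetical named cache

        // Push known items into the cluster ahead of time, with a 1-hour lifetime.
        for (int id = 1; id <= 100; id++)
        {
            cache.Put("product-" + id, LoadProduct(id), TimeSpan.FromHours(1));
        }
    }

    // Hypothetical loader standing in for a database or service call.
    static string LoadProduct(int id)
    {
        return "product " + id;
    }
}
```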

    Read the article

  • Is extensive documentation a code smell?

    - by Griffin
    Every library, open-source project, and SDK/API I've ever come across has come packaged with a (usually large) documentation file, and this seems contradictory to the widespread belief that good code needs little to no comments. What separates documentation from this programming methodology? A one- to two-page overview of a package seems reasonable, but elegant code combined with standard IntelliSense should, in theory, have deprecated the practice of documentation by now, IMO. I feel like companies only create detailed documentation and tutorials because it's what they've always done. Why should developers have to constantly search through online documentation to learn how to do things when such information should be intrinsic to the classes, methods, and namespaces?

    Read the article

  • [Not in Vermont] IT Jobs: Sharepoint/ASP.NET Dev + Winforms C# Dev in Western Mass

    Two .NET jobs in Western Mass from a recruiter; contact info below. Requirement #1: Our client is looking for the best engineers in the world, and then we give them the opportunity to excel. Our light, scrum-based process keeps you focused on delivering functionality that our customers need. We try to do things right (unit tests, continuous builds, bug tracking, etc.) and we're looking for others who work this way too. Primary Responsibilities: Develop SharePoint applications in ASP.NET with a heavy...

    Read the article

  • How to troubleshoot Ubuntu One in Maverick beta?

    - by greggory.hz
    I just updated to the Maverick beta (actually it's a fresh install, not an upgrade of 10.04). But now Ubuntu One doesn't show up in my me-menu. In addition, when I go to Ubuntu One from System > Preferences and attempt to log in from there, it will sometimes look like it works, but other times it won't do anything. When I go to the command line and type u1sdtool -s, it will either say something like "doing auth dance" or "auth failed", and often it has never even prompted me to log in or to try getting a new password (even though I know it's correct). Anyway, this is the main hangup I have with the Maverick beta: I can't get to my Ubuntu One account from the native client. Is this a widespread issue? Is there something I can do to fix this?

    Read the article

  • Basic WCF Unit Testing

    - by Brian
    Coming from someone who loves the KISS method, I was surprised to find that I was making something entirely too complicated. I know, shocker right? Now I'm no unit testing ninja, and not really a WCF ninja either, but had a desire to test service calls without a) going to a database, or b) making sure that the entire WCF infrastructure was tip top. Who does? It's not the environment I want to test, just the logic I’ve written to ensure there aren't any side effects. So, for the K.I.S.S. method: Assuming that you're using a WCF service library (you are using service libraries correct?), it's really as easy as referencing the service library, then building out some stubs for bunking up data. The service contract We’ll use a very basic service contract, just for getting and updating an entity. I’ve used the default “CompositeType” that is in the template, handy only for examples like this. I’ve added an Id property and overridden ToString and Equals. [ServiceContract] public interface IMyService { [OperationContract] CompositeType GetCompositeType(int id); [OperationContract] CompositeType SaveCompositeType(CompositeType item); [OperationContract] CompositeTypeCollection GetAllCompositeTypes(); } The implementation When I implement the service, I want to be able to send known data into it so I don’t have to fuss around with database access or the like. To do this, I first have to create an interface for my data access: public interface IMyServiceDataManager { CompositeType GetCompositeType(int id); CompositeType SaveCompositeType(CompositeType item); CompositeTypeCollection GetAllCompositeTypes(); } For the purposes of this we can ignore our implementation of the IMyServiceDataManager interface inside of the service. Pretend it uses LINQ to Entities to map its data, or maybe it goes old school and uses EntLib to talk to SQL. Maybe it talks to a tape spool on a mainframe on the third floor. It really doesn’t matter. That’s the point. So here’s what our service looks like in its most basic form: public CompositeType GetCompositeType(int id) { //sanity checks if (id == 0) throw new ArgumentException("id cannot be zero."); return _dataManager.GetCompositeType(id); } public CompositeType SaveCompositeType(CompositeType item) { return _dataManager.SaveCompositeType(item); } public CompositeTypeCollection GetAllCompositeTypes() { return _dataManager.GetAllCompositeTypes(); } But what about the datamanager? The constructor takes care of that. I don’t want to expose any testing ability in release (or the ability for someone to swap out my datamanager) so this is what we get: IMyServiceDataManager _dataManager; public MyService() { _dataManager = new MyServiceDataManager(); } #if DEBUG public MyService(IMyServiceDataManager dataManager) { _dataManager = dataManager; } #endif The Stub Now it’s time for the rubber to meet the road… Like most guys that ever talk about unit testing here’s a sample that is painting in *very* broad strokes. The important part however is that within the test project, I’ve created a bunk (unit testing purists would say stub I believe) object that implements my IMyServiceDataManager so that I can deal with known data. 
Here it is: internal class FakeMyServiceDataManager : IMyServiceDataManager { internal FakeMyServiceDataManager() { Collection = new CompositeTypeCollection(); Collection.AddRange(new CompositeTypeCollection { new CompositeType { Id = 1, BoolValue = true, StringValue = "foo 1", }, new CompositeType { Id = 2, BoolValue = false, StringValue = "foo 2", }, new CompositeType { Id = 3, BoolValue = true, StringValue = "foo 3", }, }); } CompositeTypeCollection Collection { get; set; } #region IMyServiceDataManager Members public CompositeType GetCompositeType(int id) { if (id <= 0) return null; return Collection.SingleOrDefault(m => m.Id == id); } public CompositeType SaveCompositeType(CompositeType item) { var existing = Collection.SingleOrDefault(m => m.Id == item.Id); if (null != existing) { Collection.Remove(existing); } if (item.Id == 0) { item.Id = Collection.Count > 0 ? Collection.Max(m => m.Id) + 1 : 1; } Collection.Add(item); return item; } public CompositeTypeCollection GetAllCompositeTypes() { return Collection; } #endregion } So it’s tough to see in this example why any of this is necessary, but in a real world application you would/should/could be applying much more logic within your service implementation. This all serves to ensure that, between refactorings etc., it doesn’t send sparking cogs all about or let the blue smoke out. Here’s a simple test that brings it all home (remember, broad strokes): [TestMethod] public void MyService_GetCompositeType_ExpectedValues() { FakeMyServiceDataManager fake = new FakeMyServiceDataManager(); MyService service = new MyService(fake); CompositeType expected = fake.GetCompositeType(1); CompositeType actual = service.GetCompositeType(1); Assert.AreEqual<CompositeType>(expected, actual, "Objects are not equal. Expected: {0}; Actual: {1};", expected, actual); } Summary: That’s really all there is to it. You could use software x or framework y to do the exact same thing, but in my case I just didn’t really feel like it. This speaks volumes to my not-yet-ninja unit testing prowess.

    Read the article

  • Maximum file size for iFrame in IE7

    - by Peter Turner
    I've got a "super secure" JavaScript downloader* that I wrote, and it usually works all right. But I noticed, while trying to download a 90 MB file with it on a client's machine, that in IE7 it gets hung up about a third of the way through. I've never tried to send a file that large through the iFrame, and it works fine in other browsers. Is there a size restriction on files that IE7 can read in an iFrame? * It's really just a PHP line that sets header("location: http://someplace/downloadbigthing.exe"); after it does some logging and verification.

    Read the article

  • Java Alphabetize Algorithm Insertion sort vs Bubble Sort

    - by Chris Okyen
    I am supposed to "Develop a program that alphabetizes three strings. The program should allow the user to enter the three strings, and then display the strings in alphabetical order." I'm instructed to use the String library's compareTo()/charAt()/toLowerCase() to make all the characters lowercase, so the lexicographic comparison is also an alphabetical comparison. Input pseudocode: String input[3]; Scanner keyboard = new Scanner(System.in); System.out.println("Enter three strings: "); for(byte i = 0; i < 3; i++) input[i] = keyboard.next() The sorting would be insertion sort: 321 2 3 1 2 31 231 1 23 1 2 3 1 23 1 23 123 Bubble sort: 321 231 213 123 Which would be more efficient in this case? Bubble sort seems to be more efficient, though they seem to have equal stats for worst, best, and average case, but I read that insertion sort is quicker for small amounts of data like my case.
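    For reference, here is a minimal Java sketch of the insertion-sort approach described above, using toLowerCase() and compareTo() so the lexicographic comparison is also alphabetical; the class name and prompt are illustrative, not taken from the assignment.

```java
import java.util.Scanner;

public class AlphabetizeThree {
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        String[] input = new String[3];
        System.out.println("Enter three strings: ");
        for (int i = 0; i < 3; i++) {
            input[i] = keyboard.next();
        }

        // Insertion sort: shift larger elements right, then drop the key into place.
        for (int i = 1; i < input.length; i++) {
            String key = input[i];
            int j = i - 1;
            while (j >= 0 && input[j].toLowerCase().compareTo(key.toLowerCase()) > 0) {
                input[j + 1] = input[j];
                j--;
            }
            input[j + 1] = key;
        }

        for (String s : input) {
            System.out.println(s);
        }
    }
}
```
    For three elements the difference between insertion sort and bubble sort is negligible; insertion sort simply does fewer swaps on nearly sorted input.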

    Read the article

  • 2D Platformer Collision Handling

    - by defender-zone
    Hello, everyone! I am trying to create a 2D platformer (Mario-type) game and I am some having some issues with handling collisions properly. I am writing this game in C++, using SDL for input, image loading, font loading, etcetera. I am also using OpenGL via the FreeGLUT library in conjunction with SDL to display graphics. My method of collision detection is AABB (Axis-Aligned Bounding Box), which is really all I need to start with. What I need is an easy way to both detect which side the collision occurred on and handle the collisions properly. So, basically, if the player collides with the top of the platform, reposition him to the top; if there is a collision to the sides, reposition the player back to the side of the object; if there is a collision to the bottom, reposition the player under the platform. I have tried many different ways of doing this, such as trying to find the penetration depth and repositioning the player backwards by the penetration depth. Sadly, nothing I've tried seems to work correctly. Player movement ends up being very glitchy and repositions the player when I don't want it to. Part of the reason is probably because I feel like this is something so simple but I'm over-thinking it. If anyone thinks they can help, please take a look at the code below and help me try to improve on this if you can. I would like to refrain from using a library to handle this (as I want to learn on my own) or the something like the SAT (Separating Axis Theorem) if at all possible. Thank you in advance for your help! void world1Level1CollisionDetection() { for(int i; i < blocks; i++) { if (de2dCheckCollision(ball,block[i],0.0f,0.0f)==true) { int up = 0; int left = 0; int right = 0; int down = 0; if(ball.coords[0] < block[i].coords[0] && block[i].coords[0] < ball.coords[2] && ball.coords[2] < block[i].coords[2]) { left = 1; } if(block[i].coords[0] < ball.coords[0] && ball.coords[0] < block[i].coords[2] && block[i].coords[2] < ball.coords[2]) { right = 1; } if(ball.coords[1] < block[i].coords[1] && block[i].coords[1] < ball.coords[3] && ball.coords[3] < block[i].coords[3]) { up = 1; } if(block[i].coords[1] < ball.coords[1] && ball.coords[1] < block[i].coords[3] && block[i].coords[3] < ball.coords[3]) { down = 1; } cout << left << ", " << right << ", " << up << ", " << down << ", " << endl; if (left == 1) { ball.coords[0] = block[i].coords[0] - 16.0f; ball.coords[2] = block[i].coords[0] - 0.0f; } if (right == 1) { ball.coords[0] = block[i].coords[2] + 0.0f; ball.coords[2] = block[i].coords[2] + 16.0f; } if (down == 1) { ball.coords[1] = block[i].coords[3] + 0.0f; ball.coords[3] = block[i].coords[3] + 16.0f; } if (up == 1) { ball.yspeed = 0.0f; ball.gravity = 0.0f; ball.coords[1] = block[i].coords[1] - 16.0f; ball.coords[3] = block[i].coords[1] - 0.0f; } } if (de2dCheckCollision(ball,block[i],0.0f,0.0f)==false) { ball.gravity = -0.5f; } } } To explain what some of this code means: The blocks variable is basically an integer that is storing the amount of blocks, or platforms. I am checking all of the blocks using a for loop, and the number that the loop is currently on is represented by integer i. The coordinate system might seem a little weird, so that's worth explaining. coords[0] represents the x position (left) of the object (where it starts on the x axis). coords[1] represents the y position (top) of the object (where it starts on the y axis). coords[2] represents the width of the object plus coords[0] (right). coords[3] represents the height of the object plus coords[1] (bottom). 
de2dCheckCollision performs an AABB collision detection. Up is negative y and down is positive y, as it is in most games. Hopefully I have provided enough information for someone to help me successfully. If there is something I left out that might be crucial, let me know and I'll provide the necessary information. Finally, for anyone who can help, providing code would be very helpful and much appreciated. Thank you again for your help!
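    One common way to handle this is to resolve along the axis of least overlap: compute how far the two AABBs overlap on X and on Y, push the player out along whichever overlap is smaller, and zero the matching velocity component. A hedged C++ sketch follows; the Box struct and function name are illustrative, not the question's types.

```cpp
#include <algorithm>

// Illustrative AABB: left, top, right, bottom (y grows downward, as in the question).
struct Box { float l, t, r, b; };

// Push 'player' out of 'block' along the axis of least penetration.
void resolveAabbCollision(Box& player, float& xspeed, float& yspeed, const Box& block)
{
    float overlapX = std::min(player.r, block.r) - std::max(player.l, block.l);
    float overlapY = std::min(player.b, block.b) - std::max(player.t, block.t);
    if (overlapX <= 0.0f || overlapY <= 0.0f)
        return; // the boxes are not intersecting

    if (overlapX < overlapY) {
        // Horizontal hit: push back toward whichever side the player is on.
        float push = (player.l < block.l) ? -overlapX : overlapX;
        player.l += push;
        player.r += push;
        xspeed = 0.0f;
    } else {
        // Vertical hit: landing on top of the block or bumping its underside.
        float push = (player.t < block.t) ? -overlapY : overlapY;
        player.t += push;
        player.b += push;
        yspeed = 0.0f;
    }
}
```
    Resolving one axis at a time like this avoids most of the corner glitches that come from classifying left/right/up/down with separate boolean flags.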

    Read the article

  • How to get working cups command line tools on Server 14.04

    - by Nick
    It looks like some of the commands like lpr and lprm have broken versions that don't work with cups. These commands worked properly on 10.04. lpr for cups has an -o option, but no lpr is intalled when cups is installed, and the lpr installed with apt-get install lpr does not have the -o option and does not appear to be the cups version of lpr. man lpr shows BSD General Commands Manual at the top, where man lpr on the Ubuntu 10.04 server said Apple, inc in the same spot. which leads me to believe the "wrong" lpr is in the "lpr" package or package names got moved around. There is also a lprng package, but trying to install it wants to remove cups and cups-client. lprm also returns lprm: PrinterName: unknown printer when PrinterName is in fact a valid printer installed with cups and does appear in lpstat -t. How do I get the proper cups versions of lpr working on Server 14.04?

    Read the article

  • Audio Panning using RtAudio

    - by user1801724
    I use the RtAudio library. I would like to implement an audio program where I can control the panning (e.g. shifting the sound from the left channel to the right channel). In my specific case, I use RtAudio in duplex mode (you can find an example here: duplex mode). It means that I link the microphone input to the speaker output. I have searched on the web, but I did not find anything useful. Should I apply a filter on the output buffer? What kind of filter?
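    Panning in this setup is less a filter than a per-channel gain applied inside the duplex callback. Below is a hedged C++ sketch; it assumes the stream was opened with two interleaved RTAUDIO_FLOAT32 channels in each direction, and the PanState struct is an illustrative stand-in for whatever you pass as userData.

```cpp
#include "RtAudio.h"
#include <cmath>

// Illustrative pan state handed to the callback via userData:
// 0.0 = hard left, 0.5 = center, 1.0 = hard right.
struct PanState { float pan; };

// Duplex callback sketch: copy the microphone input to the speaker output,
// scaling left and right with equal-power gains derived from 'pan'.
int duplexPanCallback(void* outputBuffer, void* inputBuffer, unsigned int nFrames,
                      double /*streamTime*/, RtAudioStreamStatus status, void* userData)
{
    if (status) { /* input overflow or output underflow; ignored in this sketch */ }

    const float* in = static_cast<const float*>(inputBuffer);
    float* out = static_cast<float*>(outputBuffer);
    const float pan = static_cast<PanState*>(userData)->pan;

    const float halfPi = 1.5707963f;
    const float leftGain  = std::cos(pan * halfPi);
    const float rightGain = std::sin(pan * halfPi);

    for (unsigned int i = 0; i < nFrames; ++i) {
        out[2 * i]     = in[2 * i]     * leftGain;   // left channel
        out[2 * i + 1] = in[2 * i + 1] * rightGain;  // right channel
    }
    return 0; // keep the stream running
}
```
    For a mono microphone you would read one input sample per frame and write it to both output channels with the same two gains.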

    Read the article

  • Audio Panning using RtAudio

    - by user1801724
    I use the RtAudio library. I would like to implement an audio program where I can control the panning (e.g. shifting the sound from the left channel to the right channel). In my specific case, I use a duplex mode (you can find an example here: duplex mode). It means that I link the microphone input to the speaker output. I searched the web, but I did not find anything useful. Should I apply a filter on the output buffer? What kind of filter? Can anyone help me? Thanks

    Read the article

  • RESTful Java on Steroids (Parleys, Podcast, ...)

    - by alexismp
    As reported previously here, the JAX-RS 2.0 (JSR 339) expert group is making good progress. If you're interested in what the future holds for RESTful Java web services, you can now watch Marek's Devoxx presentation or listen to him in the latest Java Spotlight Podcast (#74). Marek discusses the new client API, filters/handlers, Bean Validation integration, hypermedia support (HATEOAS), server-side async processing and more. With JSR 339's Early Draft Review 2 currently out, another draft review is planned for April; the public review should be available in June, while the final draft is currently scheduled for the end of the summer. In short, expect completion sometime before the end of 2012.
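    As a taste of the client API being discussed, here is a hedged Java sketch; the target URL is a placeholder, and since the spec is still in draft, method names could shift before it goes final.

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class RestClientSketch {
    public static void main(String[] args) {
        // Build a client, point it at a resource, and make a synchronous GET.
        Client client = ClientBuilder.newClient();
        String body = client.target("http://example.com/api")   // placeholder URL
                            .path("orders")
                            .queryParam("status", "open")
                            .request(MediaType.APPLICATION_JSON)
                            .get(String.class);
        System.out.println(body);
        client.close();
    }
}
```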

    Read the article

  • Not-So-Well-Known features of Ubuntu

    - by Khaja Minhajuddin
    Ubuntu always surprises me by making things easier in unexpected ways, but it isn't always obvious. For instance, when I was looking for a tool to RDP into Windows, I found that Ubuntu already has a pre-installed app called Terminal Server Client which has this functionality. I am sure there are many such hidden things which are not known to the average user. This thread could be used to expose these little gems. EDIT: Maybe my choice of words was wrong. Are there any things which you found were not very obvious?

    Read the article

  • C++ : Lack of Standardization at the Binary Level

    - by Nawaz
    Why didn't ISO/ANSI standardize C++ at the binary level? There are many portability issues with C++, which exist only because of this lack of standardization at the binary level. Don Box writes (quoting from his book Essential COM, chapter "COM As A Better C++"): C++ and Portability Once the decision is made to distribute a C++ class as a DLL, one is faced with one of the fundamental weaknesses of C++, that is, lack of standardization at the binary level. Although the ISO/ANSI C++ Draft Working Paper attempts to codify which programs will compile and what the semantic effects of running them will be, it makes no attempt to standardize the binary runtime model of C++. The first time this problem will become evident is when a client tries to link against the FastString DLL's import library from a C++ development environment other than the one used to build the FastString DLL. What are the benefits and drawbacks of this lack of binary standardization?

    Read the article

  • I'm getting unrelated system messages in terminal?

    - by Zed
    For some reason, from time to time I keep getting these weird system messages in my working terminal emulator, unrelated to anything I do. For example: [000:000] Browser XEmbed support present: 1 [000:000] Browser toolkit is Gtk2. [000:001] Using Gtk2 toolkit [000:033] Starting client channel. [000:048] Read port file, port=33359 [000:050] Initiated connection to GoogleTalkPlugin [000:154] Socket connection established [000:154] ScheduleOnlineCheck: Online check in 5000ms [000:203] Got cookie response, socket is authorized [000:203] AUTHORIZED; socket handshake complete [005:216] HandleOnlineCheck: Starting check [005:216] HandleOnlineCheck: OK; current state: 3 Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory After some investigation I concluded that those messages ARE from Firefox. However, I didn't start Firefox from the terminal. Or: nsBuiltinDecoderStateMachine::RunStateMachine queuing nsBuiltinDecoder::PlaybackEnded nsBuiltinDecoder::PlaybackEnded mPlayState=3 nsBuiltinDecoderStateMachine::RunStateMachine queuing nsBuiltinDecoder::PlaybackEnded nsBuiltinDecoder::PlaybackEnded mPlayState=3 I have no clue how this ends up in my working terminal. Any thoughts?

    Read the article

  • Do Not Uninstall Flag on Apt?

    - by Daniel C. Sobral
    Does the Debian/Ubuntu package infrastructure have some way of marking packages so that they never get uninstalled, no matter the pinning of other packages? My problem is that, sometimes, packages installed by Puppet (coming from non-standard repositories, of course) cause other packages to get uninstalled -- in particular, openssh-{server,client}. The way this happens is that package A and package B depend on different versions of package C. If A is installed and one asks to install B, then the version of C changes. The new version of C is incompatible with A, so A gets uninstalled. The funny thing is that the process is then reversed, as, on the next run, Puppet notices that A is not installed and tries to install it. So, basically, I want to make sure A never gets uninstalled, which would prevent B from getting installed. That would be reported as an error, making me aware of the issue. If anyone cares, Puppet uses the following command to install packages: /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install <package>

    Read the article

  • Video monetising and simple shop, platform?

    - by fieldman
    I have this client that wants a single-product website. The product is a training video that they want to deliver virtually and, optionally, physically. I usually do all the front-end design and back-end development, but the budget is close to $0 to start with. So I'm looking for a platform like Shopify or something where a shop/cart can be set up quickly and simply with minimal up-front cost, but which can accommodate some kind of paywall (DRM too?) for the online video, with an option to purchase the physical DVD for an additional cost. Am I approaching this the wrong way altogether? Or do you know of any platform that will accommodate these specs?

    Read the article

  • Are there any alternative JS ports of Box2D?

    - by Petteri Hietavirta
    I have been thinking about creating a top-down 2D car game for HTML5. For my first game I wrote the physics and collisions myself, but for this one I would like to use some ready-made library. I found Box2D and its JS port: http://box2d-js.sourceforge.net It seems to be quite an old port, made in 2008. Is it lacking many features of the current Box2D, or does it have major issues of its own? And are there any alternatives to it?

    Read the article

  • Installing old Loki games on 12.04 64-bit results in no audio

    - by FlabbergastedPickle
    All, here's an interesting problem. I followed the instructions provided online for installing Loki Games' Heroes of Might and Magic 3 (see http://www.swanson.ukfsn.org/loki/ and http://wtanaka.com/node/7641) and got it installed and patched to the latest version. However, every time I start it, regardless of whether pulseaudio is running, I get the following error: LD_LIBRARY_PATH=/usr/local/lib/Loki_Compat/ /usr/local/lib/Loki_Compat/ld-linux.so.2 /usr/local/games/Heroes3/heroes3.dynamic ALSA lib conf.c:3314:(snd_config_hooks_call) Cannot open shared library libasound_module_conf_pulse.so ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM default Couldn't open audio: My first soundcard is the HDMI output and my second one is the actual soundcard (HP DM1 running 12.04 64-bit with latest updates). I did set up /etc/asound.conf as follows: pcm.!default { type hw card 1 } ctl.!default { type hw card 1 } So the default soundcard should work OK. Between Shadowgrounds (which also stopped working) and this, it appears there may be some unfinished business/regressions in 32-bit support on 64-bit systems in 12.04. Any thoughts?

    Read the article

  • UPK Pre-built Content Now Available for Additional Product Lines

    - by Karen Rihs
    UPK pre-built content development efforts are always underway and growing, and now include the recent release of new UPK pre-built content modules for three additional product lines: Oracle Utilities for Meter Data Management 2.0.1 (Administrative Setup, User Tasks, VEE and Usage Rules, Working with Measurement Data); Fusion 11g Release 1 (Functional Setup Manager, General Ledger, Global Human Resources, Project Portfolio Management, Self Service Procurement); and Oracle Crystal Ball 11.1.2 (Oracle Crystal Ball). For the most recent list of modules currently available for each product line, visit the UPK Resource Library on Oracle.com. For more information on how your organization can take advantage of UPK pre-built content, see our previous blog, The Value of UPK Pre-Built Content.  - Karen Rihs, UPK Outbound Product Management

    Read the article

  • Web interface with FastCGI or with direct HTTP?

    - by Basile Starynkevitch
    Let's assume I want (for fun, at first) to play with some new DSL (domain-specific language) idea, and I really want its user[s] (probably only me at first) to interact through a web interface. I'll probably implement it in C++ (probably using LLVM). Should I use an HTTP server library (like libonion or microhttpd) to talk HTTP directly, or should I use FastCGI? In particular, I am noticing that several recent web frameworks (Opa, Ocsigen, ...) do not have any FastCGI interface, only an HTTP one. So my feeling is that FastCGI is really out of fashion. Any opinions on that? Do you know of any recently started projects using FastCGI? (And what about SCGI?)
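    For scale, here is a hedged C++ sketch of how small the embedded-HTTP route can be with GNU libmicrohttpd; the port is arbitrary, and the handler signature shown is the classic int-returning one (recent releases use an MHD_Result enum instead).

```cpp
#include <microhttpd.h>
#include <cstdio>
#include <cstring>

// Answer every request with a fixed page; a real DSL front end would dispatch on 'url'.
static int handleRequest(void* cls, struct MHD_Connection* connection,
                         const char* url, const char* method, const char* version,
                         const char* upload_data, size_t* upload_data_size, void** con_cls)
{
    static const char page[] = "<html><body>DSL web front end placeholder</body></html>";
    struct MHD_Response* response = MHD_create_response_from_buffer(
        std::strlen(page), (void*)page, MHD_RESPMEM_PERSISTENT);
    int ret = MHD_queue_response(connection, MHD_HTTP_OK, response);
    MHD_destroy_response(response);
    return ret;
}

int main()
{
    struct MHD_Daemon* daemon = MHD_start_daemon(MHD_USE_SELECT_INTERNALLY, 8888,
                                                 NULL, NULL, &handleRequest, NULL,
                                                 MHD_OPTION_END);
    if (daemon == NULL)
        return 1;
    std::getchar();          // serve until a key is pressed
    MHD_stop_daemon(daemon);
    return 0;
}
```
    With FastCGI the code would look similar but would sit behind nginx or Apache, which then owns the listening socket and process management.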

    Read the article

  • Debian Stable vs Ubuntu LTS for Server?

    - by Kevin
    Quick question: which is a better platform for a professional-use server, Debian Stable or Ubuntu LTS? The third-party software we plan to use works on both. Which one is better on its own merits? Take into account things like the kernel (Ubuntu, for example, has its own custom kernel for servers) and other Ubuntu-specific customizations. I keep switching back and forth, and I need to decide so I can recommend one or the other to a client. Right now, I think I am going to choose Debian Stable. Recently, I have had Ubuntu Server Edition 10.04.1 show a few strange issues: I have Ubuntu set up to do automatic updates via a simple script, and every few months or so, libapache2-mod-php5 gets removed because of conflicting packages, thereby causing me to lose the PHP functionality of the web server. Debian Stable has not done anything like this.

    Read the article

  • Visual Studio 2010 Setup Projects and x64 Support

    - by Shawn Cicoria
    I was taking the Windows Azure CmdLets project and getting it into an MSI just to make it easier to deploy in a nice package. I ran into problems with the Setup project not being able to properly establish the right registry settings for an x64 environment. Even though you set the target platform on the Setup project to x64, the InstallUtilLib.dll that gets run is still x86. In order to have it work properly, you need to follow the steps identified here: http://msdn.microsoft.com/en-us/library/kz0ke5xt.aspx  The section “64-bit managed custom actions throw a System.BadImageFormatException exception” covers the steps you need to follow, using the Orca MSI editor to replace the InstallUtilLib.dll that the Setup project embeds (x86) with an x64 version. Now it works like a charm… Resultant installer here: http://cicoria.com/Downloads/AzureManagementCmdletsInstall.msi The CmdLets are the same ones from the Training Kit – November 2010 release.

    Read the article

  • Using my own code in freelance projects.

    - by Witchunter
    I have been in the freelance business for more than 2 years. While doing projects for other people, I've built a collection of common tasks that I implement in projects and put them into code. It's kind of a library with some functions that I can reuse without having to rewrite the same thing a dozen times. I'm talking about accessing Access databases, downloading information from FTP, and similar stuff. Is this acceptable from a legal point of view? What's the difference between reusing the old code and rewriting it from scratch (using your own brain again, and therefore the exact same logic)? I do not hold any copyright to it, of course, and I provide the source code for these classes to my clients.

    Read the article

  • What to do when issue-tracker is down?

    - by Pablo
    It has happened in our team that our issue tracker goes down. It happens about once a week now (yes, wow), and there's not really much we can do to get it back up, since it's hosted by our client in a different time zone. It sometimes takes several hours for it to be operative again. In the meantime, we can't really tell which issues we were working on, and even when we can, we cannot update those issues -- moving them through the workflow, logging hours spent, checking the issue's description, leaving comments, and so on. So the question is: how can we, as a team, work in the meantime so that when the issue tracker is up again, we have the least possible hassle updating it with what we've been working on?

    Read the article
